Consider adding `conjure-up deis` #520

Closed
castrojo opened this Issue Dec 14, 2016 · 7 comments

If conjure-up kubernetes is "give me infrastructure", then we're still missing the nice developer experience. Ben and I were thinking it'd be great if we just gave people a fully working PaaS, so that after deployment developers can get straight to work:

So ... https://deis.com/docs/workflow/installing-workflow/

The idea is that conjure-up deis would give you canonical-kubernetes, and then conjure-up would follow those steps to get deis up and running and help the user create a new user so that they can start deploying applications right away.

We'd need to consult with the deis folks to do more production-grade things, but I think this would be a great proof of concept for getting someone from zero to PaaS quickly.
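
For concreteness, the steps conjure-up would be automating (per the Workflow docs linked above; chart repo URL and commands are as of late 2016 and may have changed) look roughly like:

helm repo add deis https://charts.deis.com/workflow
helm install deis/workflow --namespace deis
# once the deis router has an external address, register the first user
# (the controller endpoint below is illustrative)
deis register http://deis.example.com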

Contributor

mikemccracken commented Dec 14, 2016

Sounds like a good improvement - as I understand it, this involves installing client programs for helm and deis-workflow on the system that you're running conjure-up on, and then using those to spin up a set of deis pods on the k8s cluster, as per that link, right?

I'm thinking this fits best as an optional 'step' on the kubernetes spell (so conjure-up canonical-kubernetes would have a button you could press to have it do the deis installation, after the k8s is up and running). How does that sound?
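
For reference, getting the helm and deis clients onto the machine running conjure-up was roughly the following (installer URLs are taken from the upstream docs at the time, so they may have moved since):

curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
curl -sSL http://deis.io/deis-cli/install-v2.sh | bash
sudo mv $PWD/deis /usr/local/bin/deis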

@battlemidget battlemidget self-assigned this Dec 14, 2016

Contributor

battlemidget commented Dec 19, 2016

I've been trying to get deis/helm to work (well, just helm so far) and I'm having a few issues that I think are reported here:

kubernetes/kubernetes#22770

Stemmed from: kubernetes/helm#1455

I've got my environment set up:

ubuntu@tupac:~$ ~/kubectl --kubeconfig=.kube/config.conjure-up cluster-info
Kubernetes master is running at https://10.0.8.52:443
Heapster is running at https://10.0.8.52:443/api/v1/proxy/namespaces/kube-system/services/heapster
KubeDNS is running at https://10.0.8.52:443/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at https://10.0.8.52:443/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
Grafana is running at https://10.0.8.52:443/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
InfluxDB is running at https://10.0.8.52:443/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

However, trying to run a helm install fails:

ubuntu@tupac:~$ KUBECONFIG=~/.kube/config.conjure-up ./ghost/linux-amd64/helm repo add deis https://charts.deis.com/workflow
"deis" has been added to your repositories
ubuntu@tupac:~$ KUBECONFIG=~/.kube/config.conjure-up ./ghost/linux-amd64/helm install deis/workflow --namespace deis
Error: forwarding ports: error upgrading connection: Upgrade request required
ubuntu@tupac:~$ KUBECONFIG=~/.kube/config.conjure-up ./ghost/linux-amd64/helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "stable" chart repository
...Successfully got an update from the "deis" chart repository
Update Complete. ⎈ Happy Helming!⎈ 
ubuntu@tupac:~$ KUBECONFIG=~/.kube/config.conjure-up ./ghost/linux-amd64/helm install deis/workflow --namespace deis
Error: forwarding ports: error upgrading connection: Upgrade request required

@chuckbutler, any idea here?

This was discovered by a community member. I think the culprit is the nginx load balancer if you have the api-lb deployed. If you remove that and instead use the k8s master => k8s worker relationship directly, this goes away.

Can you give that a go (with the kubernetes-core bundle) and see if that resolves the issue? I think our effort to support an HA master before we landed the actual HA master code has introduced this problem: the load balancer is configured differently from what helm is expecting. This suggests we should probably be using HAProxy, as it has better support for these types of integrations.
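
Something like this should reproduce the test (the kubeconfig path matches the one conjure-up wrote above; adjust as needed):

conjure-up kubernetes-core
KUBECONFIG=~/.kube/config.conjure-up helm repo add deis https://charts.deis.com/workflow
KUBECONFIG=~/.kube/config.conjure-up helm install deis/workflow --namespace deis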

Contributor

battlemidget commented Jan 28, 2017

@chuckbutler could you outline the steps I would need to run to experiment with getting this setup to work? Right now I'm using the localhost provider with ceph as my backend storage. Is removing the load balancer something I need to do with kubectl, or is that a juju command?

@battlemidget battlemidget modified the milestone: later Jan 30, 2017

If you've deployed kubernetes-core, no additional changes need to be made; Helm should "just work" at this point. If you're using CDK, you'll need to remove the api-load-balancer charm and add the kube-api-endpoint relation between kubernetes-master and kubernetes-worker. This will cause the kubeconfig to regenerate properly, and you can then use that kubeconfig for Helm deployments.
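
In Juju terms that workaround would be something along these lines (application and relation names are from the CDK bundle at the time; check juju status for the exact names in your model):

juju remove-application kubeapi-load-balancer
juju add-relation kubernetes-master:kube-api-endpoint kubernetes-worker:kube-api-endpoint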

rahworkx commented Feb 10, 2017

@chuckbutler, we just applied your workaround and are now stuck registering against the deis cluster FQDN. Any suggestions for what to do?

deis register deis.kjuju.domainname.com
Error: Get https://deis.kjuju.domainname.com/v2/: dial tcp 54.145.155.12:443: getsockopt: operation timed out

chuckbutler commented Feb 10, 2017

This looks suspiciously like something didn't happen as we expected it to.

https://kubernetes.io/docs/getting-started-guides/ubuntu/troubleshooting/#common-problems

This should give you a good overview of how to "manually" fix the kubeconfig and get you moving with helm charts on CDK. If that doesn't do it, I'll need to move you out of this bug and into the bundle-canonical-kubernetes issue tracker, and start aggregating reasons why it might be failing so we can find the root cause.
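
For CDK, pulling a fresh kubeconfig straight off the master unit is usually the quickest manual fix (unit name may differ in your model):

mkdir -p ~/.kube
juju scp kubernetes-master/0:config ~/.kube/config
kubectl cluster-info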

@battlemidget battlemidget modified the milestone: later Sep 4, 2017
