- `deploy/kube-templates/heketi-turnkey.yaml`: the deployment itself
- `deploy/kube-templates/heketi-turnkey-config.yaml`: sample configuration
- `deploy/Dockerfile`: Dockerfile for the heketi-turnkey image
I'm still relatively new to containers, so please help me understand how this works. You would use the included Dockerfile to build a container, and then deploy the ConfigMap and the Pod (which uses the container you just built), which executes …
Hi @jarrpa,

The Docker image is an automated build available on Docker Hub as `lenart/heketi-turnkey`. (If you want, you can build your own image using that Dockerfile; in that case you will need to download the `heketi-turnkey.yaml` file and modify it to point to your image.)

It works on the Kubernetes cluster you create the pod on. It uses `https://kubernetes` as the API server URL. (A Service named `kubernetes` already exists in every Kubernetes cluster, pointing to the API server, and kube-dns makes it possible for pods to resolve that name to the correct IP address of the master/API server.)

The pod is configured to use the default service account in the default namespace. The level of access that service account provides depends on your cluster deployment. If it does not have the needed role and you are not happy to assign it, you may create a new service account, assign the needed access to that, and then modify the `heketi-turnkey.yaml` file to use that service account (the …

(Apologies for not providing much documentation yet... I wanted to get it out quickly.)
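Wiring in a custom service account could look roughly like this. This is a hedged sketch, not the contents of the actual `heketi-turnkey.yaml`: the account name `heketi-turnkey` and the pod spec details are illustrative placeholders.

```yaml
# Hypothetical example; names and fields are illustrative,
# not copied from the real heketi-turnkey.yaml.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heketi-turnkey
  namespace: default
---
apiVersion: v1
kind: Pod
metadata:
  name: heketi-turnkey
spec:
  serviceAccountName: heketi-turnkey  # used instead of the default service account
  containers:
  - name: heketi-turnkey
    image: lenart/heketi-turnkey
```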
Hi @lenartj ,
Hi @harshal-shah,

Could you post the logs of the heketi pod? Could you try removing the … Have you got the …
Hi,
Could there be a problem with your cluster network? Also, can you try …
The service is accessible from the nodes:

Also, the pod is picked up by the service:

The above results are from a new cluster which I have spun up; I deleted the old cluster last night.
For now, I managed to proceed by making the heketi service of type LoadBalancer. Thanks for your help.
@harshal-shah, are you running on GKE or AWS or ...? I am happy that it works :-) There should be no need for a LoadBalancer in front of this service, though, so I suggest you keep investigating what's odd. At this point I'm almost certain it's a network setup issue.
@lenartj yes, I'm using GKE. As we already found that the service URL works from the nodes, I will try opening the firewall to all IPs to check if that is the issue. Will keep you updated.
Oh, I understand, so you need to be able to provision volumes from outside the cluster. Keep in mind that a LoadBalancer is not free on GKE; it comes out to a minimum of ~$20/month per LB. You could make the Service type NodePort, for example.
Yes, I was thinking of that as well. Making the service a NodePort seems like the best option. Volume provisioning within the cluster didn't work for me either.
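A NodePort Service for heketi might look roughly like this. This is a sketch under assumptions: the selector label and port numbers are placeholders, not taken from the actual templates (8080 is heketi's default REST port).

```yaml
# Hypothetical NodePort Service for heketi; labels and ports are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: heketi
spec:
  type: NodePort
  selector:
    name: heketi        # assumed label; match your heketi pod's labels
  ports:
  - port: 8080          # heketi's default REST port
    targetPort: 8080
    nodePort: 30080     # any free port in the default 30000-32767 range
```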
I've actually run into the same problem as @harshal-shah. Even worse, I cannot `curl clusterip:8080/hello`; it times out as well. I can curl `/hello` on the service endpoint, though, from `kubectl get endpoints heketi`. I'm guessing I also have some network issues?
Another approach that worked for me was exposing the heketi service as a NodePort (thanks @lenartj) and using the public IP of any host plus the NodePort in the StorageClass definition. Is it possible that the glusterfs provisioner is calling the heketi service from outside the Kubernetes network rather than from within?
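The StorageClass change described above would look roughly like this. A hedged sketch: the class name, IP, and port are placeholders; `kubernetes.io/glusterfs` is the in-tree glusterfs provisioner, and `resturl` is the parameter that tells it where to reach heketi.

```yaml
# Sketch of a StorageClass pointing at heketi via a node's public IP
# and the NodePort; the address and class name are placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs
provisioner: kubernetes.io/glusterfs
parameters:
  # Must be reachable from wherever provisioning requests originate
  # (on GKE, the externally hosted apiserver).
  resturl: "http://203.0.113.10:30080"
```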
@harshal-shah, @kwtalley, thanks a lot for testing. I can see the problem now. You are both running on GKE, and on GKE the master is outside of the cluster. The provisioning requests originate from the apiserver, which is hosted by Google for you.

35.187.11.226 was the IP of the Kubernetes apiserver hosted by Google. The cluster IPs are not reachable from that apiserver; that's why it worked for you when you created a LoadBalancer, or a NodePort and referred to the public IP of a node.
@lenartj Can you host one more image version of image …
Well, I've been digging into this more and haven't made much progress. My setup is actually a bare-metal cluster in my own lab. For some reason I can only reach a service IP from the node hosting the service pod; the other nodes in the cluster cannot reach it. I've been searching through iptables rules and getting some tcpdumps, and I'm seeing the following:
Turns out kube-proxy did not have the `--cluster-cidr=flannel-network/mask` flag on the worker nodes. Adding that fixed the services! @harshal-shah, check if this is set in your workers' kube-proxy config.
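The flag mentioned above corresponds to the `clusterCIDR` field in kube-proxy's configuration file. A sketch, assuming the component-config style of configuring kube-proxy; the CIDR shown is a placeholder that must match your flannel pod network.

```yaml
# kube-proxy configuration fragment (equivalent to --cluster-cidr on the
# command line); the CIDR value is a placeholder for your flannel network.
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
```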
Looking over this again, I'm not sure that this is a better solution than #155, which seems more robust, less fiddly, and better able to handle errors. The only downside I see to the other solution is requiring Ansible as a dependency. @lenartj, can you make an argument for the pros of this solution over the other one?
Closing this PR. It can be reopened if requested.
This is extremely useful. |
Nope, still isn't. |
This is a proof of concept deployment using only Kubernetes objects (and a Docker image, see below).
https://asciinema.org/a/103212
While it is useful for me I am not sure how useful it is in general :-)
It solves issue #161