HDFS cluster can't work with kubernetes + flannel in coreos #363
@ztao1987 I don't know much about HDFS, but I think what is happening is this: because the namenode is accessed as a service, the connection goes through kube-proxy, so all connections actually come from it (and its connections bind to the docker0 interface). I think it would be best to ask the Kubernetes community for best practices for working around this problem.
@eyakubovich Thanks for your reply. I already posted this question in the Kubernetes community. I checked this Google Groups thread; it seems he works around it by using the namenode pod IP. But in my case, the namenode pod IP is translated into something like a bridge name. Do you happen to know why?
Not sure what you mean
I mean that if I use the namenode's actual pod IP to start the datanode, the datanode recognizes it as "k8s_POD-2fdae8b2_namenode-controller-keptk" and fails to start. I got a workaround for this from @luqman in the Google Groups thread you provided. He mentioned that I need DNS to solve this problem.
Is this issue caused by flannel or by Kubernetes? I mean the part where, when the datanode is started with the namenode's actual pod IP, it recognizes it as "k8s_POD-2fdae8b2_namenode-controller-keptk".
I got the answer from the Kubernetes community: use the latest Kubernetes and pass --proxy-mode=iptables to the kube-proxy start command. The HDFS cluster works now.
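For reference, a minimal sketch of how the flag might be passed on a CoreOS node. The binary path, master address, and unit layout here are assumptions; only the --proxy-mode=iptables flag itself comes from the thread.

```shell
# Hypothetical kube-proxy invocation (e.g. ExecStart in a systemd unit).
# Paths and the --master address are illustrative; --proxy-mode=iptables
# is the flag that resolved this issue.
/opt/bin/kube-proxy \
  --master=https://10.20.0.1:443 \
  --proxy-mode=iptables
```

In iptables mode, service traffic is DNAT'ed in the kernel rather than proxied through the kube-proxy userspace process, so the client pod's IP is preserved, which is why the datanode then registers correctly.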
@ztao1987 Nice! We plan to make github.com/coreos/coreos-kubernetes default to iptables soon. |
@ztao1987 |
@ksr1 Assuming you already have namenode/datanode Docker images, deploy them using a ReplicationController/DaemonSet in k8s. Remember to start kube-proxy with --proxy-mode=iptables. If you want persistent data, use volumes and volumeMounts for whatever data you need to keep. Nothing special for the other parts.
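A sketch of what such a deployment could look like. The image name, labels, mount paths, and hostPath location are all illustrative assumptions, not taken from the thread; only the ReplicationController-with-volumeMounts pattern is what the comment above describes.

```yaml
# Hypothetical HDFS datanode ReplicationController (names/paths assumed).
apiVersion: v1
kind: ReplicationController
metadata:
  name: datanode-controller
spec:
  replicas: 1
  selector:
    app: datanode
  template:
    metadata:
      labels:
        app: datanode
    spec:
      containers:
      - name: datanode
        image: example/hdfs-datanode   # assumed image name
        volumeMounts:
        - name: hdfs-data
          mountPath: /hadoop/dfs/data  # assumed HDFS data dir
      volumes:
      - name: hdfs-data
        hostPath:
          path: /var/lib/hdfs-data     # node-local persistence (assumed)
```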
@ztao1987 Thanks for the reply. Will try setting the params in Kubernetes and try the setup. |
I think this is resolved. Please comment if not and I can reopen. |
Hi,
I deployed Kubernetes with the flanneld.service enabled on CoreOS. I then started the HDFS namenode and datanode via Kubernetes replication controllers, and also created a Kubernetes service for the namenode. The namenode service IP is 10.100.220.223, while the namenode pod IP is 10.20.96.4. In my case, one namenode and one datanode happen to be on the same host, and the namenode and datanode pods can ping each other successfully.
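The namenode service described above might look roughly like this. The selector label and port are assumptions (8020 is the usual HDFS namenode RPC port); the thread only states that a service with cluster IP 10.100.220.223 fronts the namenode pod at 10.20.96.4.

```yaml
# Hypothetical namenode Service (label and port are assumed).
apiVersion: v1
kind: Service
metadata:
  name: namenode
spec:
  selector:
    app: namenode
  ports:
  - port: 8020
```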
However I encountered the following two problems when trying to start hdfs datanode:
I tried searching for this issue online, but nothing helped. Could you please help me out? Thanks.