
Use headless services for Training jobs #40

Closed
jlewi opened this issue Sep 19, 2017 · 4 comments

jlewi (Contributor) commented Sep 19, 2017

We should use headless services. We don't need load balancing since there is a single pod that is the backend for each service. I think this should provide some performance benefits but I don't know how much.
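
(For illustration only, a minimal sketch of what a headless Service for a single training replica could look like; the job name, labels, and port below are hypothetical, not taken from this issue.)

```yaml
# Hypothetical headless Service for one training replica.
apiVersion: v1
kind: Service
metadata:
  name: myjob-worker-0
spec:
  clusterIP: None          # headless: no virtual IP, no kube-proxy load balancing
  selector:
    tf_job: myjob
    task_index: "0"        # exactly one pod matches, so DNS resolves straight to its IP
  ports:
  - port: 2222             # gRPC port commonly used by distributed TensorFlow
```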

jlewi (Contributor, Author) commented Dec 7, 2017

@DjangoPeng Is this the right thing to do? Do you have other suggestions on how we might improve networking efficiency?

DjangoPeng (Member) commented Dec 7, 2017

@jlewi I think load balancing is still useful for TensorFlow Serving jobs. Typically we serve a model (e.g. face recognition) from a single TensorFlow Serving pod while the request volume is manageable, but as the number of requests grows, a single pod is no longer enough to handle them. Given that, I prefer launching TensorFlow Serving jobs as a Deployment: on the one hand, a Deployment makes it easy to scale up and down; on the other, it automatically replaces a pod that dies.
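
(A minimal sketch, assuming a hypothetical face-recognition model server, of what running TensorFlow Serving as a Deployment could look like; the names, image, and replica count are illustrative only.)

```yaml
# Hypothetical Deployment for a TensorFlow Serving job.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: face-recognition-serving
spec:
  replicas: 3                        # scale up/down as request volume changes
  selector:
    matchLabels:
      app: face-recognition-serving
  template:
    metadata:
      labels:
        app: face-recognition-serving
    spec:
      containers:
      - name: tf-serving
        image: tensorflow/serving
        ports:
        - containerPort: 8500        # TensorFlow Serving gRPC port
```

A regular (non-headless) Service in front of this Deployment would then load-balance requests across the replicas, and the Deployment controller recreates any pod that dies.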

Do you have other suggestions on how we might improve networking efficiency?

I think the key factor is the pod network implementation. Many Kubernetes users set up the Flannel overlay network by default, but Flannel is not a good choice for TensorFlow and other DL workloads. If we really want to improve networking efficiency, we'd better use other network options, such as the host network.
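
(A minimal sketch of the host-network option mentioned above; the pod name, image, and port are hypothetical. With `hostNetwork: true` the pod shares the node's network namespace, so training traffic bypasses the overlay network entirely.)

```yaml
# Hypothetical training pod using the node's host network instead of the overlay.
apiVersion: v1
kind: Pod
metadata:
  name: tf-worker-hostnet
spec:
  hostNetwork: true                     # no overlay (e.g. Flannel) encapsulation overhead
  dnsPolicy: ClusterFirstWithHostNet    # keep cluster DNS working on the host network
  containers:
  - name: tensorflow
    image: tensorflow/tensorflow
    ports:
    - containerPort: 2222               # bound directly on the node, so the port must be free there
```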

jlewi changed the title from "Use headless services" to "Use headless services for Training jobs" on Dec 7, 2017

jlewi (Contributor, Author) commented Dec 7, 2017

Sorry, I should have clarified that by headless services I only meant in the context of training jobs. For training jobs we need to assign a stable name to each replica, and for a given replica there should be only one pod backing it. So I think load balancers just introduce overhead.
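
(For illustration, one possible way to give each replica a stable DNS name: point the pod's hostname/subdomain at a headless Service. This is a hypothetical sketch, not the operator's actual implementation, and it assumes a headless Service named `myjob` in the same namespace that selects the job's worker pods.)

```yaml
# Hypothetical worker pod that gets a stable DNS name via a headless Service.
apiVersion: v1
kind: Pod
metadata:
  name: myjob-worker-0
  labels:
    tf_job: myjob
    task_index: "0"
spec:
  hostname: myjob-worker-0
  subdomain: myjob           # must match the name of a headless Service in the same namespace
  containers:
  - name: tensorflow
    image: tensorflow/tensorflow
    ports:
    - containerPort: 2222
```

Other replicas can then reach this one at `myjob-worker-0.myjob.<namespace>.svc.cluster.local`, with DNS resolving directly to the pod IP and no load-balancing hop in between.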

Regarding network performance, is there a simple benchmark that can be run to measure network performance in a way that's relevant to TF/DL?

jlewi added this to the Kubecon Europe milestone on Jan 25, 2018

lluunn (Contributor) commented Mar 13, 2018

I will take a stab at this one
