Use headless services for Training jobs #40
@jlewi I think an LB is genuinely useful for TensorFlow Serving jobs. Typically we launch a model (e.g., face recognition) in a single TensorFlow Serving pod while the request volume is manageable, but as the number of requests grows, one pod can no longer keep up. For that reason I prefer launching TensorFlow Serving jobs as a Deployment: on the one hand, a Deployment is easy to scale up and down; on the other hand, it automatically replaces dead pods.
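To make that concrete, here is a minimal sketch of the pattern I mean: a TF Serving Deployment fronted by a LoadBalancer Service. The model name, image tag, ports, and labels are all illustrative assumptions, not anything from this repo:

```yaml
# Hypothetical manifest: TF Serving behind a LoadBalancer Service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: face-recognition-serving
spec:
  replicas: 3                       # scale up/down as request volume changes
  selector:
    matchLabels:
      app: face-recognition-serving
  template:
    metadata:
      labels:
        app: face-recognition-serving
    spec:
      containers:
      - name: tf-serving
        image: tensorflow/serving:latest
        args:
        - --model_name=face_recognition
        - --model_base_path=/models/face_recognition
        ports:
        - containerPort: 8500       # gRPC
---
apiVersion: v1
kind: Service
metadata:
  name: face-recognition-serving
spec:
  type: LoadBalancer                # spreads inference requests across replicas
  selector:
    app: face-recognition-serving
  ports:
  - port: 8500
    targetPort: 8500
```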
I think the key point is the pod network implementation. Many Kubernetes users deploy the Flannel overlay network by default, but Flannel is not a good choice for TensorFlow and other DL workloads: the overlay's packet encapsulation adds latency and cuts throughput, which matters for bandwidth-hungry distributed training. If we really want to improve networking efficiency, we'd be better off with other network options, such as the host network.
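For example, opting a pod into the host network is essentially a one-line spec change. A rough sketch, with placeholder names and image:

```yaml
# Hypothetical pod spec: bypass the overlay by using the node's network
# namespace directly. Names and image are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: tf-worker-0
spec:
  hostNetwork: true                   # use the node's network stack, no overlay
  dnsPolicy: ClusterFirstWithHostNet  # keep cluster DNS resolution working
  containers:
  - name: tensorflow
    image: tensorflow/tensorflow:latest
    command: ["python", "/app/train.py"]
```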
Sorry, I should have clarified: I only meant headless services in the context of training jobs. For training jobs we need to assign a stable name to each replica, and a given replica should be backed by exactly one pod. Since there is nothing to balance across, load balancers just introduce overhead.
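As a sketch of what I mean, assuming one pod per replica (the labels are placeholders, and 2222 is just the conventional distributed-TF gRPC port):

```yaml
# Hypothetical headless Service for a single training replica (worker-0).
# clusterIP: None means DNS resolves straight to the backing pod's IP,
# with no virtual IP or kube-proxy load balancing in between.
apiVersion: v1
kind: Service
metadata:
  name: tf-worker-0
spec:
  clusterIP: None                  # headless: stable name, no load balancing
  selector:
    job: my-training-job           # placeholder labels identifying the replica
    task: worker-0
  ports:
  - port: 2222
```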
Regarding network performance, is there a simple benchmark that can be run to measure network performance in a way that's relevant to TF/DL?
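Not a TF-specific answer, but a crude baseline we could start from is raw pod-to-pod TCP throughput with iperf3, run once on the overlay and once with hostNetwork to see the gap. Everything below (names, image) is assumed for illustration:

```yaml
# Hypothetical pod-to-pod throughput check with iperf3: run a server pod,
# then a client Job pointed at it. This measures raw TCP bandwidth only,
# not TF-specific traffic patterns.
apiVersion: v1
kind: Pod
metadata:
  name: iperf3-server
  labels:
    app: iperf3-server
spec:
  containers:
  - name: iperf3
    image: networkstatic/iperf3    # placeholder iperf3 image
    args: ["-s"]                   # server mode
---
apiVersion: v1
kind: Service
metadata:
  name: iperf3-server
spec:
  clusterIP: None                  # headless, resolves to the server pod
  selector:
    app: iperf3-server
  ports:
  - port: 5201
---
apiVersion: batch/v1
kind: Job
metadata:
  name: iperf3-client
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: iperf3
        image: networkstatic/iperf3
        args: ["-c", "iperf3-server", "-t", "30"]  # 30-second TCP test
```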