Do you have any example of deployment mediapipe to kubernetes in distributed mode? #49

Closed
dkozlov opened this issue Aug 25, 2019 · 4 comments

dkozlov commented Aug 25, 2019

Hello,

Do you have any example of deploying MediaPipe to Kubernetes in distributed mode? Or do you have any plans to add an example of deploying MediaPipe in distributed mode across several machines?

dkozlov changed the title from "Do you have any example of deployment mediapipe to kubernetes?" to "Do you have any example of deployment mediapipe to kubernetes in distributed mode?" Aug 25, 2019

mgyong commented Aug 25, 2019

@dkozlov We have a Dockerfile that you can use to build Docker containers of MediaPipe.
Typically, the MediaPipe graph runtime maps one-to-one to a machine instance (i.e., parallel independent machines, each running its own MediaPipe graph). There is currently no support for running MediaPipe in distributed mode (one graph spanning several machines).
What kind of distributed mode are you looking for, and what is the use case? Media processing of lots of video files?
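For reference, the "parallel independent machines" setup described above maps directly onto a stock Kubernetes Deployment: each pod runs its own complete MediaPipe graph, and Kubernetes handles replication and scheduling. Below is a minimal sketch using the official Kubernetes Python client; the image name, graph config path, and replica count are placeholders rather than anything MediaPipe ships.

```python
# Sketch: N independent MediaPipe workers, one full graph per pod.
# Assumes an image built from MediaPipe's Dockerfile and pushed to a registry.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running in-cluster

container = client.V1Container(
    name="mediapipe-worker",
    image="my-registry/mediapipe:latest",  # placeholder image name
    args=["--calculator_graph_config_file=/graphs/pipeline.pbtxt"],  # placeholder graph
)
template = client.V1PodTemplateSpec(
    metadata=client.V1ObjectMeta(labels={"app": "mediapipe"}),
    spec=client.V1PodSpec(containers=[container]),
)
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="mediapipe-workers"),
    spec=client.V1DeploymentSpec(
        replicas=4,  # parallel, fully independent graph instances
        selector=client.V1LabelSelector(match_labels={"app": "mediapipe"}),
        template=template,
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

This scales out throughput (e.g., processing many video files in parallel) but does not split a single graph across machines.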

dkozlov commented Aug 26, 2019

For example, a pipeline that must execute in real time and consists of large neural network models that cannot fit on a single instance: model parallelism, lower-latency parallel inference (at batch size 1). As I remember, TensorFlow can be run in distributed mode for model parallelism. Do you have any example of integrating MediaPipe with https://github.com/tensorflow/mesh?
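One way to picture the setup being asked about, independent of MediaPipe or Mesh (neither provides this out of the box): each large model is served from its own node, and a thin driver chains the stages at batch size 1. The stage URLs and payload format below are hypothetical.

```python
# Sketch: a pipeline of dependent models, each served from its own node,
# chained per single image (batch size 1). All endpoints are hypothetical.
import requests

STAGES = [
    "http://model-a.internal:8501/predict",  # e.g., detector on node A
    "http://model-b.internal:8501/predict",  # e.g., classifier on node B
]

def run_pipeline(image_bytes: bytes) -> dict:
    payload = {"inputs": image_bytes.hex()}
    for url in STAGES:
        # Each stage consumes the previous stage's output; end-to-end latency
        # is the sum of per-stage inference time plus one network hop per stage.
        payload = requests.post(url, json=payload, timeout=1.0).json()
    return payload
```

Note this is pipeline parallelism between separately hosted models, which is distinct from the intra-model parallelism that tensorflow/mesh targets.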

mgyong commented Aug 27, 2019

@dkozlov Unfortunately we don't have an example of MediaPipe working with TensorFlow Mesh. Is your model so big that you need model parallelism? What's the use case?

dkozlov commented Aug 27, 2019

Thank you for the response, @mgyong! I have several different models that depend on each other and are connected into a single complicated pipeline, and they do not fit on a single node. My workload is low-latency parallel inference with a batch size of a single image. Full pipeline latency should be under 1 second. I am not actually using model parallelism; it is just separately connected models, but I am curious whether MediaPipe could help me.
