
Description
I'm completely new to Kubeflow, but from my perspective its main advantage is the use of pipelines to set up production-ready, end-to-end machine learning workflows.
I'm trying to port existing production code into a Kubeflow pipeline. However, one of the steps performs distributed training. I understand how to set up a pipeline step that uses a small number of GPUs within the same pod (a sketch of what I mean is below), but I still don't see how to integrate distributed training with a PyTorchJob into a Kubeflow pipeline.
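
For reference, this is roughly what I mean by the single-pod GPU setup; a minimal sketch assuming the kfp v1 SDK, with a placeholder image name and training command:

```python
import kfp
from kfp import dsl


def train_op(num_gpus: int = 2):
    # One pipeline step that runs training inside a single pod,
    # requesting a small number of GPUs on that pod.
    return dsl.ContainerOp(
        name='train',
        image='my-registry/train:latest',  # placeholder image
        command=['python', 'train.py', '--gpus', str(num_gpus)],
    ).set_gpu_limit(num_gpus)


@dsl.pipeline(
    name='single-pod-training',
    description='Training that uses a few GPUs within the same pod.',
)
def training_pipeline():
    train_op()


if __name__ == '__main__':
    kfp.compiler.Compiler().compile(training_pipeline, 'pipeline.yaml')
```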
Is this possible? What are the current solutions for this?