Better GPU support #6

Closed
jlewi opened this issue Jul 26, 2017 · 4 comments

jlewi commented Jul 26, 2017

We should make it easier to use GPUs.

Right now, to use GPUs, the user has to add the appropriate volume mounts to the PodSpec in the TfJob to mount the GPU devices from the host, and set other fields, such as environment variables, as needed.

I think we should have a higher-level API. For example:

type TFReplicaSpec struct {
  ...
  Gpus []GpuSpec
}

type GpuSpec struct {
  Type  string
  Count int32
}

The TfJob controller could then be instantiated with the necessary information to add the appropriate volume mounts and scheduling information to the pods.
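
To make the idea concrete, here is a rough sketch of that translation. This is not actual operator code: the device paths and client-go types are just the usual ones, and GpuSpec.Type is ignored for simplicity.

package sketch

import (
    "fmt"
    "strings"

    v1 "k8s.io/api/core/v1"
)

// GpuSpec as proposed above.
type GpuSpec struct {
    Type  string
    Count int32
}

// volumesForGpus sketches how the controller could expand a GpuSpec into
// hostPath volumes and matching mounts for each TensorFlow container.
// The device paths are the typical NVIDIA locations on the host, not
// values the operator is guaranteed to use.
func volumesForGpus(gpus []GpuSpec) ([]v1.Volume, []v1.VolumeMount) {
    devices := []string{"/dev/nvidiactl", "/dev/nvidia-uvm"}
    for _, g := range gpus {
        for i := int32(0); i < g.Count; i++ {
            devices = append(devices, fmt.Sprintf("/dev/nvidia%d", i))
        }
    }

    var volumes []v1.Volume
    var mounts []v1.VolumeMount
    for _, d := range devices {
        // Derive a valid volume name, e.g. /dev/nvidia0 -> dev-nvidia0.
        name := strings.Trim(strings.Replace(d, "/", "-", -1), "-")
        volumes = append(volumes, v1.Volume{
            Name: name,
            VolumeSource: v1.VolumeSource{
                HostPath: &v1.HostPathVolumeSource{Path: d},
            },
        })
        mounts = append(mounts, v1.VolumeMount{Name: name, MountPath: d})
    }
    return volumes, mounts
}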

@wbuchwalter

Hey!
If the user is not responsible for mounting the drivers, what's your vision for dealing with different driver versions and different install locations?

jlewi commented Jul 26, 2017

Hi

So I think it would be the responsibility of the "ops" person who deploys the TfJob operator to specify the location of the drivers on the host machine and appropriate mount points in the cluster.

This assumes that all GPU nodes in the cluster use the same driver version and have the drivers installed in the same location.

Supporting multiple driver versions really depends on whether K8s eventually supports this.
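
Just to illustrate where that ops-level information would live, the config handed to the TfJob operator at deploy time (once per cluster, not per job) might look something like this; the field names here are hypothetical:

# Hypothetical operator-level GPU config, set by whoever deploys the operator.
gpus:
  devices:                                  # device files to hostPath-mount into pods
  - /dev/nvidia0
  - /dev/nvidiactl
  - /dev/nvidia-uvm
  ld_library_path: /usr/local/cuda/lib64    # exported as LD_LIBRARY_PATH
  node_selector:
    accelerator: nvidia-tesla-k80           # label the ops person applies to GPU nodes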

For the TfJob operator the goal is really to just cut down on some of the boilerplate when specifying GPU jobs.

So with the current operator you can write a TfJob spec to use GPUs like so:

- apiVersion: mlkube.io/v1beta1
  kind: TfJob
  spec:
    replica_specs:
    - replicas: 1
      template:
        metadata:
          creationTimestamp: null
        spec:
          containers:
          - args:
            - --gpu
            env:
            - name: LD_LIBRARY_PATH
              value: /usr/local/cuda/lib64
            image: gcr.io/project/tf_smoke_cmle-375-20:latest
            name: tensorflow
            resources: {}
            securityContext:
              privileged: true
            volumeMounts:
            - mountPath: /dev/nvidia0
              name: dev-nvidia
            - mountPath: /dev/nvidiactl
              name: dev-nvidiactl
            - mountPath: /dev/nvidia-uvm
              name: dev-nvidia-uvm
          restartPolicy: OnFailure
          volumes:
          - hostPath:
              path: /dev/nvidia0
            name: dev-nvidia
          - hostPath:
              path: /dev/nvidiactl
            name: dev-nvidiactl
          - hostPath:
              path: /dev/nvidia-uvm
            name: dev-nvidia-uvm
      tf_port: 2222
      tf_replica_type: MASTER

Since the mount paths would be the same for all TfJobs, there's no reason to make the user specify them when creating individual jobs. The user could just specify the following:

- apiVersion: mlkube.io/v1beta1
  kind: TfJob
  spec:
    replica_specs:
    - replicas: 1
      template:
        metadata:
          creationTimestamp: null
        spec:
          containers:
          - args:
            - --gpu
      tf_port: 2222
      tf_replica_type: MASTER
      Gpus:
      - type: nvidia-tesla-k80
        count: 1

The TfJob operator would be instantiated with the information it needs to add to the actual JobController specs to use GPUs. This would include adding the volume mounts shown above and scheduling constraints so the pods get scheduled on GPU nodes.
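
For example, the scheduling piece might be no more than a nodeSelector injected into the generated PodSpec; the label below is illustrative and would be whatever the ops person applies to the GPU nodes:

        spec:
          nodeSelector:
            accelerator: nvidia-tesla-k80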

jlewi commented Jul 31, 2017

PR #9 is out for review.

It's pretty close to what I suggested above. The main difference is that we look at container resources and limits to determine whether GPUs are required, rather than introducing new fields to indicate when GPUs are desired.
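
So instead of a new Gpus field, the user asks for GPUs through the container's resource limits. With the alpha GPU resource Kubernetes exposed at the time, that presumably looks something like:

          containers:
          - name: tensorflow
            image: gcr.io/project/tf_smoke_cmle-375-20:latest
            resources:
              limits:
                alpha.kubernetes.io/nvidia-gpu: 1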

jlewi commented Aug 1, 2017

PR #9 is merged.

@jlewi jlewi closed this as completed Aug 1, 2017
kuikuikuizzZ added a commit to kuikuikuizzZ/tf-operator that referenced this issue Jan 11, 2021
feat(status): updateStatus-> update