Hardware transcoding #12
Yep this won't pass through any additional directories needed for hardware
acceleration.
Have you got any info on how you'd enable this normally (ideally, how to do it in Docker)? We'd then just want some way to set a nodeSelector of some sort for the pod to target the nodes with GPUs.
On Tue, 9 Jan 2018 at 20:56, Cole Kennedy wrote:
It looks like the current implementation does not support gpu hardware
transcoding. From my understanding, we just need the NVIDIA/AMD drivers in
the plex containers to make this happen. I can work on this if you have
some direction.
I should add that you can also configure the image that kube-plex uses for the transcoder, so if you were to provide an image of the same version but with the drivers added in, it should work. There is some more information on using GPUs with Kubernetes here: https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/. It looks like we need to add the following to the container spec:
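(The spec fragment this comment pointed at did not survive the thread export. Going by the linked Kubernetes GPU-scheduling docs, it would be a resource limit on the container, roughly like this; the `nvidia.com/gpu` name is the one registered by the NVIDIA device plugin, with AMD using `amd.com/gpu`:)

```yaml
# Sketch based on kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/
spec:
  containers:
    - name: plex
      resources:
        limits:
          nvidia.com/gpu: 1   # GPUs may only be specified under limits
```

Note that this only works once the relevant vendor's device plugin DaemonSet is running on the node.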
This could fold into #6.
I'll dive into this issue this weekend.
@colek42, any luck on this one?
I played with it for about an hour but did not make any progress. Someday I'll hack at it again.
This may be of interest: I won't have time to try it out for at least a week, but basically it goes over either using nvidia-docker hooks for NVIDIA devices, or DRI hooks for Mesa for Intel devices. The latter is probably the most interesting, considering many clusters will be placed on public clouds running predominantly Intel CPUs. There are also some interesting reference links.
I got this working. In my case, one of the k8s nodes is running as a VM on a host with a Skylake CPU that exposes the integrated Intel GPU to the Ubuntu VM running the k8s node (via IOMMU & GVT device virtualization). I'm also running the Intel GPU Device Plugin for Kubernetes in order to expose the GPU device to pods that request it in a resource limit, e.g.:
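(The resource-limit example at the end of this comment was lost in the export; assuming the resource name the Intel GPU device plugin registers with the kubelet, it would look something like:)

```yaml
resources:
  limits:
    gpu.intel.com/i915: 1   # resource name registered by the Intel GPU device plugin
```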
Enabling this was as easy as adding the above resource setting to the values for this Helm chart (after deploying the necessary Intel GPU device plugin DaemonSet).
I have also got this working with an old NVIDIA GPU using the NVIDIA device plugin. It requires:
This works fine even when running Plex as a non-root user (as is the default in the plexinc docker images). I also tried adding 2 NVIDIA GPUs (as I had a spare lying around). These were visible when running
I'm working on getting this to work with kube-plex across multiple pods, but I haven't quite figured out the syntax for adding the limits/requests during pod creation. My thought was to add them as environment variables on the kube-plex container, but I'm stuck, having little familiarity with using client-go to create resources this way.
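(In case it helps with the client-go side: the values just need to end up in the generated transcoder container's `Resources` field. A minimal sketch, assuming the `k8s.io/api` and `k8s.io/apimachinery` modules and a `container` variable of type `corev1.Container` built elsewhere in kube-plex; `nvidia.com/gpu` is a placeholder resource name:)

```go
import (
    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
)

// Attach a GPU limit to the transcoder container before the pod is created.
// GPU resources are only valid under Limits; the scheduler then places the
// pod on a node whose device plugin advertises this resource.
container.Resources = corev1.ResourceRequirements{
    Limits: corev1.ResourceList{
        corev1.ResourceName("nvidia.com/gpu"): resource.MustParse("1"),
    },
}
```

Reading the resource name and count from environment variables on the kube-plex container, as suggested above, would then just mean substituting those values into the `ResourceList` key and quantity.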
I'll see if I can take a look at this as well, as this is a feature I would like to make use of if possible. |