Hi, greetings from Germany!

I just discovered this project and have been trying out some of its features. I have to say that I really like it so far.
I would love to see support for resource-constrained Jupyter notebooks.
At the moment I can request a GPU-accelerated notebook, but it blocks an entire GPU.
However, I would like to host multiple notebooks on a single GPU, perhaps with a VRAM or processing constraint per notebook.
I think that on the Docker side it should be possible to host multiple instances on a single GPU, but correct me if I'm wrong.
Thank you!
> However, I would like to host multiple notebooks on a single GPU.
Sadly, this is not easy to support. Within the same notebook server you can launch multiple kernels, but the GPUs themselves don't provide strong isolation. The exceptions are advanced GPU virtualization techniques like Bitfusion, which carefully manage per-process limits on GPU RAM, or the Multi-Instance GPU (MIG) support that recently became available with the A100. Absent those, we don't recommend running more than one active kernel that needs the GPU(s) at any given time.
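For what it's worth, at the Docker level nothing prevents scheduling multiple containers onto the same physical GPU; the catch is that they share the device's VRAM with no enforcement between them. A minimal sketch, assuming the NVIDIA Container Toolkit is installed (the image name and ports are placeholders, not part of this project):

```shell
# Start two Jupyter containers that both see physical GPU 0.
# Docker does not partition GPU memory: both processes allocate from
# the same VRAM pool, so one notebook can still OOM the other.
docker run -d --gpus '"device=0"' -p 8888:8888 my-jupyter-image  # placeholder image
docker run -d --gpus '"device=0"' -p 8889:8888 my-jupyter-image

# Frameworks can limit themselves cooperatively, e.g. inside a notebook,
# cap PyTorch to ~50% of the device's memory (not enforced by the driver):
#   torch.cuda.set_per_process_memory_fraction(0.5, device=0)
```

This is only cooperative sharing, not the hard per-notebook VRAM constraint asked for above; a hard limit requires MIG-capable hardware or a virtualization layer.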