Hello everyone, I am a newcomer to this community.
I have only a 2 GB GPU, but my friend has 4 GB, so I generally train my model on his machine. I normally use Jupyter Notebook and code in Python. Recently I learned about "Running a notebook server" and set one up. Now I can remotely run a Jupyter notebook from my machine (client) while the resources are used on my friend's machine (server).
However, 4 GB of GPU is also not sufficient for me. I am curious whether I could remotely use GPUs from several of my friends' machines, cluster them, and then run the Jupyter notebook against that cluster. It is similar to the server-client model we created before, but I want to extend it to multiple "shared servers" so that I can use all of their GPUs in a collaborative, distributed fashion. It is a kind of many-to-one model: servers (many) and client (one).
Can anybody tell me how I can achieve that with the Jupyter Notebook server? Or is there any other option to remotely use GPUs from different machines and run my Python code remotely? It would be very kind if you could point me to an appropriate place where I can find a solution.
Thanks
You might be able to do that with Python code inside Jupyter, but Jupyter itself isn't the right technology for it. What you want is some kind of parallel computing; there are many frameworks for that, and ipyparallel is one that lives under the Jupyter umbrella.
It's a complicated problem, because the computer has to work out how to divide up the data and the tasks efficiently. Different tools expose different levels of control. Whole books have been written about this.
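Just to make the idea concrete, here is a minimal sketch of the scatter/gather pattern such frameworks provide, written with only the standard library (ipyparallel itself needs a running cluster of engines, so it can't be demonstrated in a snippet; `train_shard` and the shards below are illustrative stand-ins for real per-machine workloads):

```python
from concurrent.futures import ThreadPoolExecutor

def train_shard(shard):
    # Stand-in for a per-shard workload; on a real cluster each
    # engine (e.g. one per friend's GPU machine) would run this.
    return sum(x * x for x in shard)

def parallel_map(func, shards, workers=4):
    # Scatter the shards to workers and gather results in order --
    # the same pattern ipyparallel's DirectView.map applies across
    # engines running on multiple machines.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(func, shards))

shards = [range(0, 5), range(5, 10)]
print(parallel_map(train_shard, shards))  # [30, 255]
```

With ipyparallel the shape is similar: you connect a `Client` to the cluster, take a view over the engines, and call `map` on it; deciding how to shard the data and the model across GPUs is the hard part the books cover.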