Running it on GPU on Google Colab? #84

Closed
basicvisual opened this issue May 7, 2021 · 12 comments
Labels: enhancement (New feature or request), priority (Issues with priority), question (Further information is requested)


@basicvisual

Hi, I managed to run your code on Google Colab. It worked fine, but it always chooses the CPU by default. I was wondering if there is a setting in the code that has to be changed in order to use the GPU?

@stefanopini
Owner

Hi @basicvisual, awesome! Are you willing to share the notebook so that I can add it to this repository (with proper referencing)?

Regarding running it on GPU, have you changed the Colab "Runtime type" from CPU to GPU or TPU? Does it still run on CPU even when choosing one of these options?
You can check if you have access to a GPU/TPU using torch.cuda.is_available().

In general, you can set the device you want to use when initializing SimpleHRNet (as done here).
To use a GPU, set it to device=torch.device('cuda').
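For reference, a minimal sketch of that device selection in the Colab notebook (the 48 channels / 17 joints values match the HRNet-w48 COCO checkpoint, and the weights path is only a placeholder for wherever you uploaded the file):

```python
import torch
from SimpleHRNet import SimpleHRNet

# Use the GPU when the Colab runtime actually exposes one, otherwise fall back to the CPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print('Using device:', device)

# 48 channels / 17 joints correspond to the HRNet-w48 COCO checkpoint;
# the weights path below is a placeholder for wherever the file was stored.
model = SimpleHRNet(48, 17, './weights/pose_hrnet_w48_384x288.pth', device=device)
```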

@basicvisual
Author

basicvisual commented May 10, 2021

Hi @stefanopini, it's a very experimental notebook. Let me see if I can make it better; as of now it just downloads the repo and adds the weights (lots of manual steps). Can I come back to you in some time with a better version, or shall I already send a link to the bare-bones version of the notebook?
For some reason, yes, it still runs on CPU even if the runtime is changed to GPU, so I was a bit confused.

@stefanopini
Owner

Sure! If you are willing to share the notebook, feel free to do so at your convenience/when you think it is ready! 🙂

Regarding the issue, that's weird. It should run on the GPU if available.
Does torch.cuda.is_available() return True? Is it running on CPU even when setting device=torch.device('cuda')?
If you'd like, I can have a look at the current notebook to try to understand what is causing the issue. In this case, just make a copy and share it with me.
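For a quick sanity check of the runtime, something along these lines should be enough (the printed version string will of course vary):

```python
import torch

print(torch.__version__)           # a '+cpu' suffix would mean a CPU-only PyTorch build
print(torch.cuda.is_available())   # should print True on a Colab GPU runtime
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # name of the GPU assigned by Colab
```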

@basicvisual
Author

basicvisual commented May 12, 2021

Hi, this is the link to the Colab notebook; as mentioned, it's at a very early stage: here

@stefanopini
Owner

Wonderful, thank you!

I've been able to get it working, and it did run on GPU.
I just had to switch from CPU to GPU runtime twice, but after that import torch; print(torch.cuda.is_available()) gave me True and the script ran on GPU afterwards.
Let me know if this works for you too.

Over the next days/weeks, I'll continue working on it with the goal of adding it to this repository. Thank you very much!

@basicvisual
Author

Thank you. One of the steps that is still manual is pointing to the pre-trained weights. I am wondering if one could mount the Google Drive with the pre-trained weights, so that the manual download and upload of the weights could be skipped.

Another question or possibility: could multiple videos be processed in one go, e.g. with a list of videos to process? I think that would require changes to the live-demo.py code?

@stefanopini
Owner

I think it is possible to mount a Google Drive folder, but it requires an API key, so it wouldn't be fully automatic in this case either.
However, I think the current code is quite handy: you just need to replace the code that downloads/uses a specific file.

I'm afraid it would. But one could make a for loop over a list of files and call the live-demo.py script for every file, or embed the code of live-demo.py in the notebook instead of calling the script.
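For reference, a rough sketch of the first option; the file names are placeholders and the --filename / --save_video flags are assumptions, so check python scripts/live-demo.py --help for the exact arguments:

```python
import subprocess

# Placeholder list of input videos; adjust the paths to your own files.
videos = ['video1.mp4', 'video2.mp4', 'video3.mp4']

for video in videos:
    # Run the repo's live-demo.py once per file; the flags below are assumptions,
    # see the script's --help output for the exact argument names.
    subprocess.run(
        ['python', 'scripts/live-demo.py', '--filename', video, '--save_video'],
        check=True,
    )
```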

@basicvisual
Author

I'm afraid it would. But one could make a for loop over a list of files and call the live-demo.py script for every file, or embed the code of live-demo.py in the notebook instead of calling the script.

I think that would be somewhat straightforward. I was wondering, wouldn't we also need to change the output.avi name in the code? Otherwise it will overwrite the video (I think).

@stefanopini
Owner

Yes, definitely
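A small sketch of how each output could be renamed right after a run, so the fixed output.avi is not overwritten; the naming pattern is just an example:

```python
import os

def rename_output(video_path, output_file='output.avi'):
    """Rename the fixed output.avi produced by live-demo.py to a per-video name."""
    name, _ = os.path.splitext(os.path.basename(video_path))
    os.replace(output_file, f'output_{name}.avi')

# e.g. after processing video1.mp4 in the loop above:
# rename_output('video1.mp4')  # output.avi -> output_video1.avi
```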

@stefanopini added the enhancement, priority, and question labels on Jun 7, 2021
@wuyenlin

I think it is possible to mount a Google Drive folder, but it requires an API key, so it wouldn't be fully automatic in this case either.
However, I think the current code is quite handy: you just need to replace the code that downloads/uses a specific file.

Hello, I have tried to rearrange the code provided by @basicvisual in a separate notebook here.
As for the pre-trained weights, yes, you need to mount your drive by manually inputting an API key after either

  1. manually downloading or uploading the file, or
  2. making a copy from the official drive to your own drive.

Details are given in the notebook.
As for now, the code loads a COCO2017 image and detects keypoints.
I think it serves as a good starting point for those interested in this repository.
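For reference, the Drive mount plus weights path then looks roughly like this; the path below is only a placeholder for wherever you copied the file in your own Drive:

```python
from google.colab import drive

# Mounting your own Drive; Colab prompts for an authorization step here.
drive.mount('/content/drive')

# Placeholder path: copy the official pre-trained weights into your Drive once,
# then reuse them across sessions instead of re-uploading the file every time.
weights_path = '/content/drive/MyDrive/simple-HRNet/pose_hrnet_w48_384x288.pth'
```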

@stefanopini
Owner

Wow, thank you!
It is definitely a great starting point for people interested in this repo.
I'll share it on the README.md with proper attribution.

@stefanopini
Owner

stefanopini commented Dec 29, 2022

New, updated notebook supporting Colab and also testing TensorRT and YOLOv5 has been added to the master branch! (See #100)
