cd oneformer/modeling/pixel_decoder/ops
sh make.sh
#4
Comments
Hi @jinfagang, thanks for your interest in our work. By default, we compile the code on a GPU machine. If you want to run on a CPU, you will need to make changes to two files:
After you change these two files, don't run `sh make.sh`.
@praeclarumjj3 hello, why not just make it work out of the box by simply checking `torch.cuda.is_available()`?
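The guard suggested here could be sketched roughly as below. This is a hedged illustration, not OneFormer's actual code: `MultiScaleDeformableAttention` is the module name the ops build would produce, and the string labels returned are purely illustrative.

```python
import importlib.util

# Sketch of the suggested guard: prefer the compiled CUDA extension when a GPU
# is present, otherwise fall back to a pure-PyTorch path on CPU-only machines.
# "MultiScaleDeformableAttention" is assumed to be the extension's module name.

def pick_backend(cuda_available: bool) -> str:
    """Choose a kernel backend without requiring the CUDA extension at import time."""
    has_ext = importlib.util.find_spec("MultiScaleDeformableAttention") is not None
    if cuda_available and has_ext:
        return "cuda_ext"   # use the compiled CUDA kernels
    return "pytorch"        # pure-PyTorch fallback, runs on CPU-only machines
```

In the repo, a check like this would sit wherever the extension is imported, with `cuda_available` supplied by `torch.cuda.is_available()`.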
Thanks for your suggestion. I plan on doing that in a future commit.
Hi @jinfagang, I have made the changes in the most recent commit. It should work for you now.
@praeclarumjj3 hello, I got an error on CPU:
|
btw, 15s to generate an image on CPU is quite slow.
Can you try saving the output instead by specifying the …
You are free to optimize the method. 15s seems fair to me, considering it's a large model, and I assume you must be running the panoptic task. If so, that gives you all three predictions, so it's 5 seconds per output.
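The per-output arithmetic here (15 s across 3 predictions, so 5 s each) can be sanity-checked with a trivial timer. This is just an illustrative sketch; `run_once` is a hypothetical stand-in for one panoptic forward pass, not OneFormer code.

```python
import time

def time_per_output(run_once, n_outputs: int) -> float:
    """Wall-clock a single run and amortize it over the predictions it yields."""
    start = time.perf_counter()
    run_once()  # e.g. one panoptic inference producing
                # panoptic + instance + semantic outputs
    elapsed = time.perf_counter() - start
    return elapsed / n_outputs

# A 15 s panoptic run yielding 3 predictions amortizes to 5 s per output.
```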
Feel free to re-open if you face any issues.
Hi @praeclarumjj3, about the changes to …
cd oneformer/modeling/pixel_decoder/ops
sh make.sh
got:

How to run on CPU?