RuntimeError: Unable to find a valid cuDNN algorithm to run convolution #17
Comments
Thanks a lot for your interest in our work. Could you please provide more details about your use case, e.g., the command, GPU memory, CUDA version, and cuDNN version? I searched around with your error message; it may come from insufficient GPU memory, see this link. If that is the case, you could try a smaller batch size. Hope this helps. |
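To gather the details asked for above, here is a small diagnostic sketch (assuming PyTorch is the framework in use; the helper name `cuda_env_summary` is mine, not from the repo) that reports the versions and free GPU memory relevant to this error:

```python
import importlib.util

def cuda_env_summary():
    """Collect version/memory info useful for debugging cuDNN errors."""
    info = {"torch_installed": importlib.util.find_spec("torch") is not None}
    if info["torch_installed"]:
        import torch
        info["torch"] = torch.__version__
        info["cuda_build"] = torch.version.cuda          # CUDA version torch was built against
        info["cudnn"] = torch.backends.cudnn.version()   # bundled cuDNN version
        info["cuda_available"] = torch.cuda.is_available()
        if info["cuda_available"]:
            free, total = torch.cuda.mem_get_info()      # bytes free / total on current GPU
            info["gpu_free_gb"] = free / 1e9
            info["gpu_total_gb"] = total / 1e9
    return info

print(cuda_env_summary())
```

If the free memory is small relative to the total, the error is most likely an out-of-memory issue rather than a version mismatch.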
I think my problem comes from CUDA and cuDNN versions that are incompatible with my device. I will change those versions to ones compatible with my device and report back. Please don't close this issue, and thank you for your quick answer. |
I solved this problem by changing cudatoolkit=10.2 to cudatoolkit=11.1. Thank you for your help. |
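For reference, that fix usually means editing the conda environment spec; a hypothetical excerpt (the exact package pins depend on the repo's actual environment.yml) might look like:

```yaml
# Hypothetical environment.yml excerpt: the fix in this thread was moving
# from cudatoolkit 10.2 to 11.1 so that it matches the installed driver.
dependencies:
  - pytorch
  - torchvision
  - cudatoolkit=11.1  # was 10.2
```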
I'm sorry, but when I render a video according to your code,
I get this error:
|
Do you mind posting the full traceback if possible? That would be helpful here. And what is the version of your package
So this may come from package incompatibility. If this is the case, you could consider either downgrading the |
Thanks for your help. The problem was the version of imageio. When I used environment.yml to set up the environment, it installed the newest version of imageio. After I changed imageio from the newest version to 2.9.0, the problem was solved. |
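To catch this kind of mismatch early instead of via a confusing runtime error, a minimal sketch (the helper name is illustrative, not from the repo) that checks the installed imageio version at startup:

```python
import importlib.metadata

def imageio_version_ok(required: str = "2.9.0") -> bool:
    """Return True iff the installed imageio matches the version this
    thread confirmed working; False if it differs or is not installed."""
    try:
        return importlib.metadata.version("imageio") == required
    except importlib.metadata.PackageNotFoundError:
        return False
```

A script could call `imageio_version_ok()` before rendering and print a pointed warning when it returns False.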
Thanks a lot for confirming the issue. Glad that it works now. |
Avoid imageio version incompatibility (#17)
Hello, thanks for your kind reply. I have other questions about Generative MultiPlane Image models. I'd like to train on my own dataset (RealEstate10K frame images), but I guess it will not work directly because the data domain is different. My first question: to use your model on datasets from a different domain, do I have to re-train StyleGANv2 on my custom dataset? |
Hello, this is Albert, and I have other questions about your model.
Could you please check the closed issues in your Git repository and reply to those?
Thank you for your kind reply.
Sincerely, Albert.
|
Hi Albert, to answer your questions:
Not necessary. You can train GMPI directly from scratch. At least for FFHQ, we tried training from scratch before, and the results looked good. It will just take quite a long time, since there is no longer any prior from the pre-trained checkpoints. Another caveat I want to mention: GMPI needs a camera pose for the discriminator to be conditioned on. Essentially, this means you need to be able to somehow provide camera poses for images from RealEstate10K. Frankly speaking, how to adapt GMPI to such indoor-scene data, and whether it will work well without further modification, is still an open research problem. Only experiments can tell.
Assume you use the model variant that conditions on normalized depth as mentioned here. Then this line could be a good starting point for you to understand how alphas are computed, i.e., checking how
As the variable
Hope these may help. |
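For intuition about what those per-plane alphas are ultimately used for, here is a minimal sketch of standard back-to-front multiplane-image "over" compositing in plain NumPy. This illustrates the general MPI rendering rule, not GMPI's actual renderer:

```python
import numpy as np

def composite_mpi(colors: np.ndarray, alphas: np.ndarray) -> np.ndarray:
    """Composite multiplane-image layers back to front with the 'over' rule.

    colors: (L, H, W, 3) RGB per plane, ordered back plane first.
    alphas: (L, H, W, 1) per-plane opacity in [0, 1].
    Returns the (H, W, 3) composited image.
    """
    out = np.zeros(colors.shape[1:], dtype=np.float64)
    for rgb, a in zip(colors, alphas):
        # A closer layer contributes a * rgb and attenuates what is behind it.
        out = rgb * a + out * (1.0 - a)
    return out
```

For example, a fully opaque back plane seen through a half-transparent front plane blends the two colors 50/50.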
Hello, I'm interested in your model, so I'm trying to render images with the pretrained model.
I'm trying to render images and a video on the FFHQ512 dataset.
According to docs/TRAIN_EVAL.md, I understand that if I make ffhq512x512.zip with stylegan2-ada-pytorch and the ffhq512_deep3dface_coeffs, then
I can render images and videos and extract meshes.
But when I try to render an image to get MPI results, this error came out.
I placed the pretrained model files at ml-gmpi/ckpts/gmpi_pretrained/FFHQ512.
Could you tell me why this problem happens?
Thanks for your attention.