the input_size and out_size of big-lama #83
Hi,
First of all, thank you for open-sourcing such a great project.
I noticed that the out_size in the released big-lama config.yaml is 256. Was the big-lama model trained on images of size 256?

windj007: Hi! All LaMa models in the paper were trained on 256x256 crops from Places. The original resolution of images in Places is approximately 512.

Feel free to reopen the issue if you have further questions.

Follow-up: Hi, @windj007. When training on Places, why doesn't LaMa downscale each image to 256 before cropping? Is cropping directly at the original resolution more meaningful than resizing first?

windj007: Due to the nature of convolutions, the networks adapt to the scale of objects and textures, and they perform best at exactly that scale. Inpainting at 256x256 is not very interesting in practice, so why optimize methods for such a low resolution? The original resolution of images in Places is 512, so we decided to keep that scale (i.e., the average size of objects in pixels). Training directly at 512, however, is very expensive, so we used crops instead.
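To make the two strategies concrete, here is a minimal torchvision sketch. The pipeline names are hypothetical and this is an illustration of the thread's description, not the repository's actual data-loading code:

```python
from torchvision import transforms

# Hypothetical preprocessing pipelines, NOT LaMa's actual training code.
# The thread describes strategy (a); the follow-up question proposes (b).

# (a) Crop 256x256 patches directly from the ~512px Places images,
#     preserving the native pixel scale of objects and textures.
crop_at_native_scale = transforms.Compose([
    transforms.RandomCrop(256),
    transforms.ToTensor(),
])

# (b) Downscale to 256 first. Every object shrinks to roughly half its
#     original pixel size, so a network trained this way adapts to that
#     smaller scale and matches full-resolution 512px inputs less well.
resize_then_crop = transforms.Compose([
    transforms.Resize(256),      # shorter side -> 256
    transforms.RandomCrop(256),
    transforms.ToTensor(),
])
```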
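A model trained on 256x256 crops can still target full 512px images because the networks are convolutional, so the same weights accept any input resolution. The toy stack below (not big-lama itself, just a generic conv example) demonstrates that property:

```python
import torch
import torch.nn as nn

# Toy fully convolutional stack (NOT big-lama) showing that the same
# weights run at any input resolution: training on 256x256 crops does
# not tie inference to 256x256.
net = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 3, kernel_size=3, padding=1),
)

for size in (256, 512):
    x = torch.randn(1, 3, size, size)
    y = net(x)
    print(size, tuple(y.shape))  # spatial dims match the input size
```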