Hi,

Thank you for sharing LaMa! The inpainting quality is really impressive!

I was wondering:

1. Comparing "LaMa-Fourier" with "Big LaMa-Fourier": how much did the larger training data (4.5M images from the Places-Challenge dataset) contribute to the improved quality of Big LaMa-Fourier? Do you think that similar results could also have been achieved for Big LaMa-Fourier with less data?
2. You have proposed a sophisticated approach to data augmentation. How much did training and inference benefit from data augmentation using segmentation masks from Detectron2?
Best wishes,
Alex
> How much did the larger training data (4.5M images from the Places-Challenge dataset) contribute to the improved quality of Big LaMa-Fourier?

Larger training data helps, but significantly less than a larger model and other training tricks (SegmPL, large masks).
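For readers unfamiliar with SegmPL: the thread suggests it is the perceptual loss computed on deep features of a segmentation-pretrained network. The sketch below only illustrates the general shape of such a feature-matching loss, using a plain ImageNet ResNet50 from torchvision as a stand-in backbone; the class name, layer cutoff, and weights here are illustrative assumptions, not the repo's exact implementation.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

class FeaturePerceptualLoss(torch.nn.Module):
    """Sketch of a perceptual loss on deep features of a frozen backbone.

    LaMa's actual loss uses a segmentation-pretrained backbone; a vanilla
    ImageNet ResNet50 stands in here purely for illustration.
    """

    def __init__(self):
        super().__init__()
        backbone = resnet50(weights="IMAGENET1K_V2")
        # Keep the trunk up to layer3 as a fixed feature extractor.
        self.features = torch.nn.Sequential(
            backbone.conv1, backbone.bn1, backbone.relu, backbone.maxpool,
            backbone.layer1, backbone.layer2, backbone.layer3,
        ).eval()
        for p in self.features.parameters():
            p.requires_grad_(False)

    def forward(self, pred, target):
        # Inputs are assumed normalized to the backbone's expected stats.
        # The loss is the L2 distance between deep features of the
        # inpainted prediction and the ground-truth image.
        return F.mse_loss(self.features(pred), self.features(target))
```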
> Do you think that similar results could also have been achieved for Big LaMa-Fourier with less data?

Less data means lower quality, but not dramatically so; shrinking the model, removing SegmPL, or training with smaller masks would each hurt more.
> How much did training and inference benefit from data augmentation using segmentation masks from Detectron2?

We do not use segmentation masks from Detectron2 for training. We tried it at the very beginning of the project but ran into technical issues (it was slow, consumed a lot of GPU memory, and hit a CUDA re-initialization limitation), so we could not use segmentation-based mask generation effectively during training. The code is still there only because we forgot to remove it when preparing the public release; note that `segm_proba` is set to 0 in all the data configs.
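If you want to verify this in your own checkout, a quick script along these lines will flag any data config where segmentation-based masks are still enabled. The `configs/data` glob and the nesting of the `segm_proba` key are assumptions based on this thread; adjust them to the actual repo layout.

```python
# Sketch: assert that segmentation-based mask generation stays disabled,
# i.e. every segm_proba found in the data configs equals 0.
import glob
import yaml  # pip install pyyaml

def find_key(node, key):
    """Yield every value stored under `key` anywhere in a nested config."""
    if isinstance(node, dict):
        for k, v in node.items():
            if k == key:
                yield v
            yield from find_key(v, key)
    elif isinstance(node, list):
        for item in node:
            yield from find_key(item, key)

for path in glob.glob("configs/data/**/*.yaml", recursive=True):
    with open(path) as f:
        cfg = yaml.safe_load(f)
    for value in find_key(cfg, "segm_proba"):
        assert value == 0, f"{path}: segm_proba={value}"
```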