
Is it better to set the same crop size for both pretraining and downstream finetuning? #19

Closed
DY-ATL opened this issue Jan 31, 2024 · 5 comments

Comments

DY-ATL commented Jan 31, 2024

Hello, I notice that the setting of habitat image generation is

256x256 resolution images, with 60 degrees field of view

When compared to the image used for downstream finetuning, there are two differences:

  1. The focal length fx is quite small (fx = (256 - 1) / 2 / tan(radians(60 / 2)) = 220.84), which is much smaller than SceneFlow's fx=1050.0.
  2. The crop size is much smaller than stereo's [352, 704].

I wonder whether it would be better to increase the crop size to match the downstream tasks, or whether this doesn't matter thanks to the relative positional embedding?
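For reference, the habitat focal length above is just the pinhole relation applied to a 256-pixel-wide image with a 60-degree horizontal field of view; a quick check in plain Python (nothing project-specific):

```python
from math import tan, radians

def focal_from_fov(width_px: int, fov_deg: float) -> float:
    # Pinhole relation: fx = (W - 1) / 2 / tan(FOV / 2)
    return (width_px - 1) / 2 / tan(radians(fov_deg / 2))

print(focal_from_fov(256, 60))  # ~220.84, the habitat focal quoted above
```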

@PhilippeWeinzaepfel

Hi,

For pre-training, we indeed use 256x256 images (both for habitat and real image pairs) from which we extract 224x224 crops.

What we find most important for downstream tasks is to both train and test at the same resolution, even if it is different from pre-training. This is why we use a tiling-based approach for stereo/flow at test time. While relative positional embedding helps, it is not enough to generalize to any resolution at test time.
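To make the tiling idea concrete, here is a rough sketch of what such a test-time loop can look like; this is not the exact code in the repository, and the tile size, stride, and plain averaging are illustrative assumptions:

```python
import torch

def tiled_inference(model, img1, img2, tile_hw=(352, 704), overlap=0.5):
    # Run a fixed-resolution model on overlapping tiles and average the
    # overlapping predictions. Assumes the images are at least tile-sized.
    _, _, H, W = img1.shape
    th, tw = tile_hw
    sh, sw = max(1, int(th * (1 - overlap))), max(1, int(tw * (1 - overlap)))

    # Tile start positions, making sure the last tile touches the image border.
    ys = sorted(set(list(range(0, H - th + 1, sh)) + [H - th]))
    xs = sorted(set(list(range(0, W - tw + 1, sw)) + [W - tw]))

    out_sum, weight = None, torch.zeros(1, 1, H, W, device=img1.device)
    for y in ys:
        for x in xs:
            pred = model(img1[..., y:y+th, x:x+tw], img2[..., y:y+th, x:x+tw])
            if out_sum is None:
                out_sum = torch.zeros(1, pred.shape[1], H, W, device=img1.device)
            out_sum[..., y:y+th, x:x+tw] += pred
            weight[..., y:y+th, x:x+tw] += 1.0
    return out_sum / weight
```

In practice, weighting tile centers more than tile borders before averaging can further reduce visible seams between tiles.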

Overall, especially once real image pairs are included, the pre-training should be effective irrespective of the focal lengths or resolution of the downstream tasks. Pre-training at higher resolution is likely to be better, but it would be slow (DINOv2 actually pre-trained first at 224x224 before doing a second stage at larger resolution, and a similar strategy could be used here if needed).

Best
Philippe


DY-ATL commented Feb 29, 2024

Thank you for your answer!

DY-ATL closed this as completed Feb 29, 2024

DY-ATL commented Jun 24, 2024

What we find most important for downstream tasks is to both train and test at the same resolution

Is it possible to use the training scheme from DUSt3R ("We randomly select the image aspect ratios for each batch (e.g. 16/9, 4/3, etc.), so that at test time our network is familiar with different image shapes") so as to avoid tiling-based inference? The tiling-based approach doesn't work well when the image has large textureless areas, because context information cannot be propagated from the textured areas to the textureless ones if they fall in different tiles.
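Something like the following per-batch collate is what I have in mind (purely a sketch; the resolutions and names are made up, and for flow/disparity the target values would also need rescaling by the same resize factors):

```python
import random
import torch
import torch.nn.functional as F

ASPECT_RATIOS = [(512, 288), (512, 384), (384, 512), (512, 512)]  # (W, H) choices

def collate_with_random_ratio(samples):
    # One resolution is drawn per batch, so shapes are constant within a batch
    # but vary across batches, exposing the network to many aspect ratios.
    w, h = random.choice(ASPECT_RATIOS)
    def resize(x):
        return F.interpolate(x.unsqueeze(0), size=(h, w), mode="bilinear",
                             align_corners=False).squeeze(0)
    img1 = torch.stack([resize(s[0]) for s in samples])
    img2 = torch.stack([resize(s[1]) for s in samples])
    target = torch.stack([resize(s[2]) for s in samples])
    return img1, img2, target
```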

DY-ATL reopened this Jun 24, 2024
@PhilippeWeinzaepfel

In my opinion, yes, it should work (but we don't plan to launch such experiments on our side).


DY-ATL commented Jun 26, 2024

In my opinion, yes, it should work (but we don't plan to launch such experiments on our side).

I see. Thank you!
