
About implementation details #1

Closed
YangJae96 opened this issue Aug 1, 2022 · 2 comments


@YangJae96

Hi. Thank you for your great work.

I was wondering why LUT_SIZE is set to 6015 when the target domain is CUHK-SYSU.

Also, CUHK-SYSU (11,206) has more scene images than PRW (5,704).
If the batch size is set to 4 (2 for CUHK-SYSU and 2 for PRW),
are there some CUHK-SYSU images in each epoch that are never fed into the model for training?
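To make the second question concrete, here is my own rough sketch of what I mean (not the repo's code), assuming the two domain loaders are simply zipped together:

```python
# My own illustration (not the repo's code): if the CUHK-SYSU and PRW
# loaders are zipped, the joint loop ends when the shorter (PRW) loader
# runs out of batches.
cuhk_images, prw_images = 11206, 5704
batch_per_domain = 2  # batch size 4 = 2 CUHK-SYSU + 2 PRW

iters_per_epoch = min(cuhk_images, prw_images) // batch_per_domain  # 2852
cuhk_seen = iters_per_epoch * batch_per_domain                      # 5704 of 11206
print(iters_per_epoch, cuhk_seen)
```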

@caposerenity
Owner

Hi @YangJae96, thank you for your interest. I'm sorry the code has not been fully cleaned up yet due to some recent deadlines; it was made open source to comply with the ECCV rules, which may cause some confusion.

The LUT_SIZE in the config file is not used in our model. As can be seen in lines 69 and 95 of train_da_dy_cluster.py, the memory is not initialized with the LUT_SIZE parameter, so you can delete it from the config files.
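As a rough sketch of the idea (hypothetical names, not the actual code in this repo), the memory can be sized from the clustering result at runtime rather than from a fixed LUT_SIZE value:

```python
import torch
import torch.nn.functional as F

def build_cluster_memory(features: torch.Tensor, pseudo_labels: torch.Tensor) -> torch.Tensor:
    """Initialize one memory slot per cluster by averaging its features.

    features:      (N, D) L2-normalized feature vectors
    pseudo_labels: (N,) cluster ids (e.g. from DBSCAN); -1 marks outliers
    """
    valid = pseudo_labels >= 0
    labels = pseudo_labels[valid]
    feats = features[valid]
    # the memory size is decided here, by the clustering result,
    # not by a LUT_SIZE entry in the config
    num_clusters = int(labels.max().item()) + 1
    memory = torch.zeros(num_clusters, feats.size(1))
    memory.index_add_(0, labels, feats)
    counts = torch.bincount(labels, minlength=num_clusters).clamp(min=1).unsqueeze(1)
    return F.normalize(memory / counts, dim=1)
```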

In line 87 of engine.py, you can see that some CUHK-SYSU images are indeed not fed for training in each epoch. An alternative implementation is to set a fixed number of iterations per epoch (like the implementation in SPCL). I tried this strategy in an earlier stage of my experiments and observed very similar performance.
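The fixed-iteration alternative looks roughly like the following sketch (my own illustration in the spirit of SPCL's IterLoader, not the actual code here):

```python
class IterLoader:
    """Wraps a DataLoader so batches can be drawn from it indefinitely."""
    def __init__(self, loader):
        self.loader = loader
        self._iter = iter(loader)

    def next(self):
        try:
            return next(self._iter)
        except StopIteration:
            # restart the loader once it is exhausted
            self._iter = iter(self.loader)
            return next(self._iter)

# With a fixed number of iterations per epoch, both domains are cycled
# through regardless of dataset size:
# for _ in range(iters_per_epoch):
#     cuhk_batch = cuhk_iter.next()
#     prw_batch = prw_iter.next()
```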

@YangJae96
Author

Oh, I see. Thank you for the detailed explanation.
