Large differences in experimental results when BATCH_SIZE = 16 and EPOCH=500 #9

Open · Xiyan-Xu opened this issue Oct 30, 2023 · 14 comments

@Xiyan-Xu

Thanks for sharing your great work!
I trained the model myself following your README instructions, but set BATCH_SIZE = 16 and EPOCH = 500 due to limited computing resources. With this setting, my trained model performs much worse than the evaluation results presented in the paper. I am wondering whether the exact same training settings are essential to reach performance similar to the paper's model. Also, could you kindly release the checkpoint trained exclusively on the training set? I think that would be really helpful for me!
Thanks for your time and patience!

@tr3e (Owner) commented Nov 2, 2023

Sorry about that. There were some typos in evaluator.py, which we have already fixed.
Please make sure your code is up to date.

@Xiyan-Xu (Author) commented Nov 2, 2023

Thanks for the reply. I am sure my code is up to date.
Could you release the checkpoint trained exclusively on the training set? That would be really helpful.

@pabloruizponce

I have trained for 1500 epochs with a batch size of 16, and I get an FID of 12.9409 compared to the 5.9 reported in the paper. Is there any reason for such a large difference? Are all the other parameters in the config files the same ones used to train the model reported in the paper?

Thanks :)

@tr3e (Owner) commented Dec 5, 2023

I am looking into it and will get back to you as soon as possible.

@pabloruizponce

@tr3e Any news on the issue? I have trained a model with the same configuration as the one in your repo (except for the batch size):

GENERAL:
  EXP_NAME: IG-S-8
  CHECKPOINT: ./checkpoints
  LOG_DIR: ./log

TRAIN:
  LR: 1e-4
  WEIGHT_DECAY: 0.00002
  BATCH_SIZE: 16
  EPOCH: 2000
  STEP: 1000000
  LOG_STEPS: 10
  SAVE_STEPS: 20000
  SAVE_EPOCH: 100
  RESUME: #checkpoints/IG-S/8/model/epoch=99-step=17600.ckpt
  NUM_WORKERS: 2
  MODE: finetune
  LAST_EPOCH: 0
  LAST_ITER: 0

But these are my results using your evaluation script:

========== MM Distance Summary ==========
---> [ground truth] Mean: 3.7844 CInterval: 0.0012
---> [InterGen] Mean: 3.8818 CInterval: 0.0017
========== R_precision Summary ==========
---> [ground truth](top 1) Mean: 0.4306 CInt: 0.0070;(top 2) Mean: 0.6110 CInt: 0.0086;(top 3) Mean: 0.7092 CInt: 0.0060;
---> [InterGen](top 1) Mean: 0.2517 CInt: 0.0071;(top 2) Mean: 0.3818 CInt: 0.0048;(top 3) Mean: 0.4662 CInt: 0.0046;
========== FID Summary ==========
---> [ground truth] Mean: 0.2966 CInterval: 0.0085
---> [InterGen] Mean: 10.7803 CInterval: 0.1791
========== Diversity Summary ==========
---> [ground truth] Mean: 7.7673 CInterval: 0.0440
---> [InterGen] Mean: 7.8075 CInterval: 0.0274
========== MultiModality Summary ==========
---> [InterGen] Mean: 1.5340 CInterval: 0.0615

As you can see, the results are far from the ones reported in the paper. I am using your dataset in ongoing research, and in order to make a fair comparison we need to be able to replicate your results.

I hope you can find out what's going on :)
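For context, the FID reported by the evaluation script is the Fréchet distance between Gaussians fitted to feature embeddings of ground-truth and generated motions, so it is very sensitive to how closely the generated feature distribution matches the real one. Below is a minimal sketch of that standard computation, assuming features have already been extracted by the evaluator's motion encoder; gt_feats and gen_feats are placeholder arrays, not the variable names used in the repo's evaluator.py.

# Hedged sketch of the standard Frechet distance between two feature sets.
# gt_feats / gen_feats: placeholder [num_samples, feature_dim] arrays.
import numpy as np
from scipy import linalg

def frechet_distance(gt_feats: np.ndarray, gen_feats: np.ndarray) -> float:
    mu1, sigma1 = gt_feats.mean(axis=0), np.cov(gt_feats, rowvar=False)
    mu2, sigma2 = gen_feats.mean(axis=0), np.cov(gen_feats, rowvar=False)
    diff = mu1 - mu2
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)  # matrix sqrt of the product
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # drop tiny imaginary parts from numerical error
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))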

@tr3e (Owner) commented Dec 23, 2023

Hello!
I have run the latest training code in this repo exactly as-is, with a batch size of 64 (32 on each of 2 GPUs), for 1500 epochs.
The results are as follows:

========== MM Distance Summary ==========
---> [ground truth] Mean: 3.7847 CInterval: 0.0007
---> [InterGen] Mean: 4.1817 CInterval: 0.0009
========== R_precision Summary ==========
---> [ground truth](top 1) Mean: 0.4248 CInt: 0.0046;(top 2) Mean: 0.6036 CInt: 0.0044;(top 3) Mean: 0.7026 CInt: 0.0047;
---> [InterGen](top 1) Mean: 0.3785 CInt: 0.0052;(top 2) Mean: 0.5163 CInt: 0.0040;(top 3) Mean: 0.6350 CInt: 0.0032;
========== FID Summary ==========
---> [ground truth] Mean: 0.2981 CInterval: 0.0057
---> [InterGen] Mean: 5.8447 CInterval: 0.0735
========== Diversity Summary ==========
---> [ground truth] Mean: 7.7516 CInterval: 0.0163
---> [InterGen] Mean: 7.8750 CInterval: 0.0324
========== MultiModality Summary ==========
---> [InterGen] Mean: 1.5634 CInterval: 0.0334

We suggest updating to the latest code and, if possible, increasing the batch size.
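If GPU memory is what forces the batch size down to 16, one possible workaround (not something the repo documents) is gradient accumulation, which approximates a larger effective batch size by accumulating gradients over several micro-batches before each optimizer step. A toy PyTorch sketch follows; the linear model, random data, and dimensions are placeholders, and only the accumulation pattern is the point.

# Hypothetical sketch: 4 micro-batches of 16 approximate an effective batch size of 64.
import torch
from torch import nn

model = nn.Linear(64, 64)  # toy stand-in, not InterGen's network
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=2e-5)
loader = [(torch.randn(16, 64), torch.randn(16, 64)) for _ in range(8)]  # fake data

accum_steps = 4  # 16 * 4 = 64 effective batch size
optimizer.zero_grad()
for step, (x, y) in enumerate(loader):
    loss = nn.functional.mse_loss(model(x), y)
    (loss / accum_steps).backward()  # scale so accumulated gradients average correctly
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()

Note that this is only an approximation of a larger batch: anything sampled per micro-batch (e.g. diffusion timesteps and noise) still sees the smaller batch.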

@pabloruizponce

@tr3e I am still unable to replicate the results. Could you provide a contact method so we can discuss this without filling up this issue thread?

@Xiyan-Xu (Author) commented Jan 8, 2024

> @tr3e I am still unable to replicate the results. Could you provide a contact method so we can discuss this without filling up this issue thread?

Me too.

@tr3e (Owner) commented Jan 9, 2024

my email is lianghan@shanghaitech.edu.cn :)

@szqwu commented May 6, 2024

> I have run the latest training code in this repo exactly as-is, with a batch size of 64 (32 on each of 2 GPUs), for 1500 epochs. [...] We suggest updating to the latest code and, if possible, increasing the batch size.

Hi, I found that the MM Distance here is lower than what is presented in the paper. When I reproduce your work, as well as when I evaluate my own model, the MM Distance is always around 4. Could there be a mistake in the calculation?
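For reference, in the usual text-to-motion evaluation protocol, MM Distance (multimodal distance) is the mean Euclidean distance between matched text and motion embeddings produced by the learned evaluator, so its absolute value depends entirely on that embedding space; the values around 3.8 to 4.2 reported earlier in this thread are all in the same regime. A minimal sketch with placeholder embedding arrays (not the repo's exact code):

# Hedged sketch: mean distance between paired text/motion embeddings (lower is better).
# text_emb / motion_emb: placeholder [num_samples, emb_dim] arrays from the evaluator.
import numpy as np

def mm_distance(text_emb: np.ndarray, motion_emb: np.ndarray) -> float:
    return float(np.linalg.norm(text_emb - motion_emb, axis=1).mean())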

@RunqiWang77

(Screenshot 2024-06-21 110045)
The R_precision of InterGen that I reproduced is always higher than that of GT. Does anyone know the reason for this? Thank you very much.
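As background, R-precision is a retrieval metric: each motion embedding must retrieve its own text description from a pool of candidates (typically one matched plus 31 mismatched) by embedding distance, and the Top-1/2/3 hit rates are reported. Because it is measured through a learned evaluator, the ground-truth score is not a hard upper bound, so a reproduced model occasionally scoring above GT is possible, though a large and consistent gap may indicate something off in the evaluation setup. A minimal sketch with placeholder arrays (not the repo's exact implementation):

# Hedged R-precision sketch: rank the matched text among `pool_size` candidates.
# motion_emb / text_emb: placeholder [num_samples, emb_dim] arrays.
import numpy as np

def r_precision(motion_emb, text_emb, pool_size=32, top_k=3, seed=0):
    rng = np.random.default_rng(seed)
    n = len(motion_emb)
    hits = np.zeros(top_k)
    for i in range(n):
        negatives = rng.choice(np.delete(np.arange(n), i), pool_size - 1, replace=False)
        pool = np.concatenate(([i], negatives))  # matched text first, then mismatched
        dists = np.linalg.norm(text_emb[pool] - motion_emb[i], axis=1)
        rank = int(np.where(np.argsort(dists) == 0)[0][0])  # rank of the matched text
        if rank < top_k:
            hits[rank:] += 1  # a rank-r hit counts toward top-(r+1) .. top_k
    return hits / n  # [top-1, top-2, top-3] accuracy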

@nancy-ux

Hi guys! I am wondering how you all trained the model. I trained for 2000 epochs with a batch size of 16 on 4 GPUs, but I got bad results. Could you give me some advice?

@blue-blue272

> Hi guys! I am wondering how you all trained the model. I trained for 2000 epochs with a batch size of 16 on 4 GPUs, but I got bad results. Could you give me some advice?

I got similar results. Have you resolved this problem?

@nancy-ux

> I got similar results. Have you resolved this problem?

Maybe I had changed the code without realizing it; I never found the bug. But I re-downloaded the code and retrained the model, and then everything was okay.
