I can't reproduce the results for Inference time mitigation #8

Closed
liuxiao-guan opened this issue Apr 8, 2024 · 3 comments

@liuxiao-guan

I have tried to reproduce the results for inference-time mitigation, but I found that, compared to the experimental results in the paper, my reproduced FID values are higher and the similarity scores are lower. Where could the problem lie?
To reproduce the results, I performed the following steps.

  1. First, I generated the images using diff_inference.py:

     python diff_inference.py -nb 4000 --dataset laion --capstyle instancelevel_blip --rand_augs rand_numb_add

  2. Then, I evaluated the generated images with diff_retrieval.py:

     python diff_retrieval.py --arch resnet50_disc --similarity_metric dotproduct \
         --pt_style sscd --dist-url 'tcp://localhost:10001' --world-size 1 --rank 0 \
         --query_dir /root/autodl-tmp/logs/Projects/DCR/inferences/defaultsd/laion/instancelevel_blip_auginfer_rand_numb_add_2/ \
         --val_dir /root/autodl-tmp/laion_10k/train/

For the two Python files, I only changed the file paths, including savepath, checkpath (Stable Diffusion 2.1), and prompt_json. The dataset I used is the laion-10k set provided in your README.
But the results were unexpected.
[screenshots of the reproduced metrics]

FID = 21.833.
The experiment in the paper is as shown below:
[screenshots of the corresponding metrics reported in the paper]
As you can see, my sim95_pc is 0.27, which is quite different from the 0.556 reported in the paper. The same is true for the FID values.
Although I have only run this experiment once, I don't think a single run should differ this much from the paper's results.
Can you give me some advice on what the problem might be? Thanks.

@somepago
Owner

The mitigation numbers are reported for a model fine-tuned (with ddf=5) on top of SD-2 and the laion-10k dataset.

Did you train that model and run inference with it?
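
For anyone hitting the same mismatch, here is a minimal sketch of the intended workflow. It is only a sketch: the fine-tuning requirement and the checkpath setting are taken from the comments in this thread, and the checkpoint/output paths below are placeholders, not paths from the repo.

    # Sketch only: first fine-tune SD-2 on laion-10k with the mitigation setting (ddf=5)
    # as described above, then point diff_inference.py at that fine-tuned checkpoint
    # (e.g. via its checkpath setting) instead of base SD-2.1, and re-run both steps:
    python diff_inference.py -nb 4000 --dataset laion --capstyle instancelevel_blip --rand_augs rand_numb_add
    python diff_retrieval.py --arch resnet50_disc --similarity_metric dotproduct \
        --pt_style sscd --dist-url 'tcp://localhost:10001' --world-size 1 --rank 0 \
        --query_dir /path/to/inferences/from/the/finetuned/model/ \
        --val_dir /root/autodl-tmp/laion_10k/train/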

@liuxiao-guan
Author

liuxiao-guan commented Apr 15, 2024

Oh yeah, that is the problem: I just used base SD-2 to reproduce the results. Thanks! By the way, can the code also be run using Accelerate? I saw that Accelerate is used in the code.

@somepago
Owner

Thanks. Accelerate should work. Lmk if it doesn't.
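
For reference, a minimal sketch of launching the inference step through Accelerate. It assumes nothing repo-specific beyond the flags already shown in this thread; accelerate config and accelerate launch are the standard Hugging Face Accelerate CLI commands.

    # One-time setup: select the number of GPUs, mixed precision, etc.
    accelerate config
    # Launch the same inference command via Accelerate instead of plain python:
    accelerate launch diff_inference.py -nb 4000 --dataset laion \
        --capstyle instancelevel_blip --rand_augs rand_numb_add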
