Real Class PNG Format Image Request #11

Open
Rapisurazurite opened this issue Sep 4, 2023 · 23 comments

@Rapisurazurite

I've observed that the images in the "real" folder are in JPG format, while the generated images are in PNG format. I wrote the following code to convert the images in the "adm" folder to JPG:

import glob
import os

import cv2
from pathos.multiprocessing import ProcessingPool as Pool
from tqdm import tqdm

png_images = glob.glob("./*.png")
png_images.sort()

def png2jpg(png_image):
    # Read the PNG, write it as JPEG (OpenCV's default quality is 95),
    # then remove the original only after the JPEG has been written.
    img = cv2.imread(png_image)
    cv2.imwrite(png_image[:-4] + ".jpg", img)
    os.remove(png_image)

with Pool(8) as p:
    list(tqdm(p.imap(png2jpg, png_images), total=len(png_images)))

After that I ran test.py; the results on lsun_bedroom/lsun_bedroom using lsun_adm.pth were:

ACC: 0.49500
AP: 0.50112
R_ACC: 0.79800
F_ACC: 0.19200

Before converting the images, the results were as follows (as seen in issue #9):

ACC: 0.89900
AP: 0.99890
R_ACC: 0.79800
F_ACC: 1.00000

Could you provide the real-class DIRE images in PNG format? Thanks.

@eecoder-dyf

You can try setting the JPEG quality factor (QF) to 100 in cv2.imwrite; the quality of the DIRE images does strongly affect the classification results.

@Rapisurazurite
Author

You can try setting the JPEG quality factor (QF) to 100 in cv2.imwrite; the quality of the DIRE images does strongly affect the classification results.

Yes, when IMWRITE_JPEG_QUALITY is set to 100, the result is the same as with PNG. However, the real DIRE images are saved using the following code snippet:

cv2.imwrite(f"{dire_save_dir}/{fn_save}", cv2.cvtColor(dire[i].cpu().numpy().astype(np.uint8), cv2.COLOR_RGB2BGR))

By default, OpenCV saves JPEGs at quality 95, so it would be more consistent to use the same compression quality for both the real images and the adm images.

@RichardSunnyMeng

I set QF to 100 and get the results:

ACC: 0.63300
AP: 0.98968
R_ACC: 1.00000
F_ACC: 0.26600

@Rapisurazurite
Author

I set QF to 100 and get the results:

ACC: 0.63300
AP: 0.98968
R_ACC: 1.00000
F_ACC: 0.26600

That is strange; my result after setting QF to 100:

ACC: 0.89850
AP: 0.99661
R_ACC: 0.79800
F_ACC: 0.99900

@RichardSunnyMeng

Actually, as the paper states, the model should be robust to JPEG compression.

@Rapisurazurite
Author

By the way, I generated PNG-format DIRE images for the real class using compute_dire.sh and re-ran the test script; the result is:

ACC: 0.50950
AP: 0.65876
R_ACC: 0.01900
F_ACC: 1.00000

@jS5t3r

jS5t3r commented Oct 24, 2023

@Rapisurazurite From which folder did you take the images (dire, images, recons)?
Are all real images in .jpg?

Is it valid to convert .jpg directly to .png?

@Rapisurazurite
Author

@Rapisurazurite From which folder did you take the images (dire, images, recons)? Are all real images in .jpg?

Is it valid to convert .jpg directly to .png?

I used compute_dire.py to generate the .png DIRE images

@Rapisurazurite
Author

@Rapisurazurite From which folder did you take the images (dire, images, recons)? Are all real images in .jpg?
Is it valid to convert .jpg directly to .png?

I used compute_dire.py to generate the .png DIRE images

for the real dataset

@jS5t3r

jS5t3r commented Oct 24, 2023

@Rapisurazurite From which folder did you take the images (dire, images, recons)? Are all real images in .jpg?
Is it valid to convert .jpg directly to .png?

I used compute_dire.py to generate the .png DIRE images

for the real dataset

Does the paper still hold? I mean, this is a fundamental mistake then...

@jS5t3r

jS5t3r commented Oct 26, 2023

@Rapisurazurite From which folder did you take the images (dire, images, recons)? Are all real images in .jpg?
Is it valid to convert .jpg directly to .png?

I used compute_dire.py to generate the .png DIRE images

for the real dataset

I assume that you converted JPEG (compressed) to PNG (lossless).
This does not really make sense. You have to generate the PNG directly.

@lukovnikov

lukovnikov commented Oct 27, 2023

@Rapisurazurite From which folder did you take the images (dire, images, recons)? Are all real images in .jpg?
Is it valid to convert .jpg directly to .png?

I used compute_dire.py to generate the .png DIRE images

for the real dataset

I assume that you converted JPEG (compressed) to PNG (lossless). This does not really make sense. You have to generate the PNG directly.

I think he used compute_dire.py to generate the DIRE reconstructions but saved them as .png this time, whereas originally the reals were just .jpeg. I'm not sure this says a lot, since the difference between a .jpeg and its lossless DIRE reconstruction might still contain JPEG artifacts (introduced by subtracting the original JPEG image).

@lukovnikov

lukovnikov commented Oct 30, 2023

I also reproduced the OP's findings.
My steps:

  1. From the OneDrive location dire/test/lsun_bedroom/lsun_bedroom, I downloaded the files adm.tar.gz and real.tar.gz and unpacked them to lsun_test/1_fake and lsun_test/0_real, respectively. Each set contains 1000 images. The real ones are JPEG and the synthetic ones PNG.
  2. Then I downloaded the model checkpoints/lsun_adm.pth from OneDrive.
  3. Then I ran test.py, pointing to the downloaded model and the lsun_test folder.
  4. I got the following output:
ACC: 1.00000
AP: 1.00000
R_ACC: 1.00000
F_ACC: 1.00000

(4b. As a sanity check, I then swapped the real and fake image folders and got the following output:

ACC: 0.00000
AP: 0.30710
R_ACC: 0.00000
F_ACC: 0.00000

)
  5. Then I converted the images in the 1_fake directory to JPEG with quality 95 using PIL (after verifying with imagemagick that the real images had quality 95) and got the following results:

ACC: 0.50100
AP: 0.66639
R_ACC: 1.00000
F_ACC: 0.00200

  6. Interestingly, saving 1_fake as JPEG with quality 100 using PIL (subsampling=0) again gives 100% accuracy.
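The PNG-to-JPEG step in 5 and 6 can be sketched with PIL as follows (a sketch under my assumptions; to_jpeg is a hypothetical helper, and subsampling=0 is the chroma setting mentioned for the quality-100 case):

```python
from PIL import Image

def to_jpeg(png_path, jpg_path, quality=95, subsampling=-1):
    # Re-encode a PNG as JPEG. quality=100 with subsampling=0 keeps
    # full-resolution chroma, the near-lossless setting used in step 6.
    img = Image.open(png_path).convert("RGB")
    img.save(jpg_path, "JPEG", quality=quality, subsampling=subsampling)
```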

=====================================

I obtained a similar catastrophic loss of performance after JPEG encoding when using imagenet_adm.pth with ImageNet test data (note that here we must use JPEG quality 75, since the original DIRE JPEGs for the real images were quality 75), as well as with SD1 data for LSUN.

imagenet_adm_jpeg75:
ACC: 0.50040
AP: 0.66823
R_ACC: 1.00000
F_ACC: 0.00080

lsun_sd1_jpeg95:
ACC: 0.50350
AP: 0.92018
R_ACC: 1.00000
F_ACC: 0.00700

=====================================

Finally, I also did the following experiment with only synthetic images:

  1. Take the PNG images from dire/test/lsun_bedroom/lsun_bedroom/adm.tar.gz and split the 1000 images into 500/500 in 1_fake/0_real subfolders.
  2. Run lsun_adm.pth on this, which gives the following results:
lsun_adm_allfake:
ACC: 0.50000
AP: 0.49954
R_ACC: 0.00000
F_ACC: 1.00000
  3. Make a copy, convert the images in 0_real to JPEG with quality 95, and rerun lsun_adm.pth. Now we have the following results (on exactly the same data):
lsun_adm_allfake_jpeg95:
ACC: 0.99900
AP: 1.00000
R_ACC: 0.99800
F_ACC: 1.00000
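For anyone repeating this all-fake experiment, the split-and-compress step can be sketched like this (the paths and the helper name split_and_compress are my assumptions, not part of the original repo):

```python
import glob
import os
import shutil

from PIL import Image

def split_and_compress(src_dir, dst_dir, quality=95):
    # Split a folder of PNGs 50/50 into 1_fake/0_real subfolders,
    # then re-encode only the 0_real half as JPEG at the given quality.
    pngs = sorted(glob.glob(os.path.join(src_dir, "*.png")))
    half = len(pngs) // 2
    fake_dir = os.path.join(dst_dir, "1_fake")
    real_dir = os.path.join(dst_dir, "0_real")
    os.makedirs(fake_dir, exist_ok=True)
    os.makedirs(real_dir, exist_ok=True)
    for p in pngs[:half]:
        shutil.copy(p, fake_dir)  # kept as lossless PNG
    for p in pngs[half:]:
        out = os.path.join(real_dir, os.path.basename(p)[:-4] + ".jpg")
        Image.open(p).convert("RGB").save(out, "JPEG", quality=quality)
```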

@ciodar

ciodar commented Nov 2, 2023

@lukovnikov Many thanks for your analysis. I have reproduced several of your points and agree with your results; however:

5. Then I converted the images in the 1_fake directory to JPEG with quality 95 using PIL (after verifying with imagemagick that the real images had quality 95) and got the following results:

ACC: 0.50100
AP: 0.66639
R_ACC: 1.00000
F_ACC: 0.00200

The JPEG quality of LSUN should be 75, as also stated in https://github.com/fyu/lsun :

All the images in one category are stored in one lmdb database file. The value of each entry is the jpg binary data. We resize all the images so that the smaller dimension is 256 and compress the images in jpeg with quality 75.

How did you get quality 95? I ran magick identify -verbose jpeg_image.jpg and I obtain Quality 75 for all images in lsun_bedroom.

@lukovnikov

lukovnikov commented Nov 2, 2023

The JPEG quality of LSUN should be 75, as also stated in https://github.com/fyu/lsun :

All the images in one category are stored in one lmdb database file. The value of each entry is the jpg binary data. We resize all the images so that the smaller dimension is 256 and compress the images in jpeg with quality 75.

How did you get quality 95? I ran magick identify -verbose jpeg_image.jpg and I obtain Quality 75 for all images in lsun_bedroom.

@ciodar Thanks for letting me know!

I simply downloaded the reconstructions (not the original LSUN images) from the OneDrive provided by the authors here and ran the following command, which I found on StackOverflow, on several real images: identify -format '%Q' image.jpg. Running identify -verbose image.jpg | grep Quality also gives me "Quality: 95".

I'm not sure what the source of this discrepancy is, but I would be curious to find out. I was thinking it might be the default JPEG quality in the library, but then the ImageNet reconstructions should also be saved at quality 95; there, the same command gives me quality 75 (and the files are also smaller at the same resolution).

@ciodar

ciodar commented Nov 2, 2023

The JPEG quality of LSUN should be 75, as also stated in https://github.com/fyu/lsun :

All the images in one category are stored in one lmdb database file. The value of each entry is the jpg binary data. We resize all the images so that the smaller dimension is 256 and compress the images in jpeg with quality 75.

How did you get quality 95? I ran magick identify -verbose jpeg_image.jpg and I obtain Quality 75 for all images in lsun_bedroom.

@ciodar Thanks for letting me know!

I simply downloaded the reconstructions (not the original LSUN images) from the OneDrive provided by the authors here and ran the following command, which I found on StackOverflow, on several real images: identify -format '%Q' image.jpg. Running identify -verbose image.jpg | grep Quality also gives me "Quality: 95".

I'm not sure what the source of this discrepancy is, but I would be curious to find out. I was thinking it might be the default JPEG quality in the library, but then the ImageNet reconstructions should also be saved at quality 95; there, the same command gives me quality 75 (and the files are also smaller at the same resolution).

That is strange, since that is the same procedure I followed. I did not download LSUN directly, but I assume the authors did when preparing the dataset, so the JPEG quality should match the one stated by the LSUN dataset creators.

However, I'm now noticing that with quality as low as 75, the reconstructions have some colour blobs (which are not present in real images). I don't know if it is a bug in my conversion and inversion process or if the network struggles to invert heavily compressed images. I will investigate this further.

@lukovnikov

lukovnikov commented Nov 2, 2023

That is strange, since that is the same procedure I followed. I did not download LSUN directly, but I assume the authors did when preparing the dataset, so the JPEG quality should match the one stated by the LSUN dataset creators.

However, I'm now noticing that with quality as low as 75, the reconstructions have some colour blobs (which are not present in real images). I don't know if it is a bug in my conversion and inversion process or if the network struggles to invert heavily compressed images. I will investigate this further.

It is quite strange; perhaps the LSUN reconstructions on OneDrive were created using an earlier version of the code? Did you check the data from dire/test/lsun_bedroom/lsun_bedroom?

In any case, I also tried to compress both the reals and the fakes from LSUN test reconstructions down to JPEG with quality 75, leading to bad results (Acc=50%, AP~75%), and encoding just the fake reconstructions to JPEG with quality 75 while keeping the reconstruction jpegs for reals as downloaded from OneDrive gives even worse results (Acc=50%, AP<50%). In both cases, everything is classified by lsun_adm.pth as "real" with extremely high confidence (even putting the accuracy threshold to 0.01 gives 50% accuracy).
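The threshold check mentioned at the end can be sketched as follows (synthetic scores only; real scores would be the classifier's sigmoid outputs, and accuracy_at_threshold is a hypothetical helper):

```python
import numpy as np

def accuracy_at_threshold(scores, labels, thresh):
    # Binary accuracy when predicting "fake" (label 1) for scores above thresh.
    preds = (scores > thresh).astype(int)
    return float((preds == labels).mean())

# Synthetic illustration: when every score is pushed toward 0 ("real"),
# accuracy stays pinned at 50% no matter how low the threshold goes.
labels = np.array([0] * 500 + [1] * 500)
scores = np.full(1000, 1e-4)  # everything confidently classified as real
for t in (0.5, 0.1, 0.01):
    print(t, accuracy_at_threshold(scores, labels, t))
```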

@ciodar

ciodar commented Nov 2, 2023

In any case, I also tried to compress both the reals and the fakes from LSUN test reconstructions down to JPEG with quality 75, leading to bad results (Acc=50%, AP~75%), and encoding just the fake reconstructions to JPEG with quality 75 while keeping the reconstruction jpegs for reals as downloaded from OneDrive gives even worse results (Acc=50%, AP<50%). In both cases, everything is classified by lsun_adm.pth as "real" with extremely high confidence (even putting the accuracy threshold to 0.01 gives 50% accuracy).

This result is actually good, as it confirms your previous analysis. Accuracy of exactly 50% hints that the JPEG quality is the same for real and fake images, assuming the model recognizes nothing but the JPEG artifacts. This is also supported by the fact that training converges very early (in less than one epoch) to perfect classification between real JPEG images and fake PNG images.

@ciodar

ciodar commented Nov 27, 2023

I simply downloaded the reconstructions (not the original LSUN images) from the OneDrive provided by the authors here and ran the following command, which I found on StackOverflow, on several real images: identify -format '%Q' image.jpg. Running identify -verbose image.jpg | grep Quality also gives me "Quality: 95".

I gave this a second read, and it is different from what I did. The estimated JPEG quality of the reconstructions can differ from the quality of the source images. Moreover, JPEG artifacts can be present even in the uncompressed PNG images generated by the model, since diffusion models can reproduce artifacts present in the training set (see Corvi et al. 2023).

A more realistic scenario is to encode the generated images as JPEG and recompute the DIRE for all the images, since this is what would happen during an inference step with this method. Interestingly, I followed this second procedure and the results do not differ much from your findings. My intuition is that either way we are producing double (if not triple) compression artifacts, which could be spotted by the detector.

I'd like the author @ZhendongWang6 to comment on this and to say which of these two alternatives he used to calculate the robustness metrics.

@dong12003

Hello, do you still have the dataset for this project? The dataset link is now broken. If you have it, could you please send me a copy? Thank you very much.

@Rapisurazurite
Author

Hello, do you still have the dataset for this project? The dataset link is now broken. If you have it, could you please send me a copy? Thank you very much.

The dataset appears to still be available via the following link: https://rec.ustc.edu.cn/share/ec980150-4615-11ee-be0a-eb822f25e070.

By the way, for a more detailed analysis of PNG and JPEG format issues, please refer to Jonas Ricker’s research paper.

@Tranquil1ty

Tranquil1ty commented Apr 14, 2024

The JPEG quality of LSUN should be 75, as also stated in https://github.com/fyu/lsun :

All the images in one category are stored in one lmdb database file. The value of each entry is the jpg binary data. We resize all the images so that the smaller dimension is 256 and compress the images in jpeg with quality 75.

How did you get quality 95? I ran magick identify -verbose jpeg_image.jpg and I obtain Quality 75 for all images in lsun_bedroom.

@ciodar Thanks for letting me know!
I simply downloaded the reconstructions (not the original LSUN images) from the OneDrive provided by the authors here and ran the following command, which I found on StackOverflow, on several real images: identify -format '%Q' image.jpg. Running identify -verbose image.jpg | grep Quality also gives me "Quality: 95".
I'm not sure what the source of this discrepancy is, but I would be curious to find out. I was thinking it might be the default JPEG quality in the library, but then the ImageNet reconstructions should also be saved at quality 95; there, the same command gives me quality 75 (and the files are also smaller at the same resolution).

That is strange, since that is the same procedure I followed. I did not download LSUN directly, but I assume the authors did when preparing the dataset, so the JPEG quality should match the one stated by the LSUN dataset creators.

However, I'm now noticing that with quality as low as 75, the reconstructions have some colour blobs (which are not present in real images). I don't know if it is a bug in my conversion and inversion process or if the network struggles to invert heavily compressed images. I will investigate this further.

I am very curious about this result. Could you please attach some pictures?

@jS5t3r

jS5t3r commented Apr 14, 2024

@Rapisurazurite From which folder did you take the images (dire, images, recons)? Are all real images in .jpg?
Is it valid to convert .jpg directly to .png?

I used compute_dire.py to generate the .png DIRE images

for the real dataset

I assume that you converted JPEG (compressed) to PNG (lossless). This does not really make sense. You have to generate the PNG directly.

I think he used compute_dire.py to generate the DIRE reconstructions but saved them as .png this time, whereas originally the reals were just .jpeg. I'm not sure this says a lot, since the difference between a .jpeg and its lossless DIRE reconstruction might still contain JPEG artifacts (introduced by subtracting the original JPEG image).

Here is a paper about JPEG artifacts: https://arxiv.org/pdf/2403.17608.pdf
