
About the results of fbcnn_color.pth #3

Closed
YangGangZhiQi opened this issue Sep 27, 2021 · 6 comments
Labels
question Further information is requested

Comments


YangGangZhiQi commented Sep 27, 2021

Hi,
Great work! I tested your model (fbcnn_color.pth) on the 'testset/Real' dataset, and the results are not as remarkable as the pictures shown in this repo. The outputs of the model (fbcnn_color.pth) without qf_input are as follows (left is input, right is output):

[Comparison images: merge_1 through merge_6 (merge_2 did not finish uploading)]
I don't know if there is something wrong with my results. The outputs of the model (fbcnn_color.pth) with qf_input are also not very good; when I zoom out, I can see obvious artifacts. Hoping for your reply.

jiaxi-jiang (Owner) commented Sep 27, 2021

Hi, thanks for your interest!

The provided color model is trained with the single JPEG degradation model. As we analyze in the paper, such blind models usually cannot deal well with real JPEG images, which are often compressed multiple times. However, FBCNN is a flexible model: to get desirable results with fewer artifacts, just set qf_input to a smaller number, e.g. 10. See https://github.com/jiaxi-jiang/FBCNN/blob/main/main_test_fbcnn_color_real.py#L23

BTW, to get the desired result automatically, you can either estimate the dominant smaller quality factor using the method proposed in our paper (FBCNN-D), or augment the training data with our proposed double JPEG degradation model (FBCNN-A).
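
For concreteness, a minimal sketch of this usage (model and image loading are assumed to follow main_test_fbcnn_color_real.py; the exact forward signature here is an assumption, not a definitive API):

```python
# Sketch: restore a real (possibly multiply-compressed) JPEG with a forced
# small quality factor. Assumes `model` and `img_L` (a 1xCxHxW tensor) are
# loaded as in main_test_fbcnn_color_real.py, and that the forward pass
# accepts an optional qf tensor -- treat the call signature as an assumption.
import torch

quality_factor = 10                                  # try 5-10 for real JPEGs
qf = torch.tensor([[1.0 - quality_factor / 100.0]])  # network uses the [1, 0] scale

with torch.no_grad():
    img_E, predicted_qf = model(img_L, qf)           # img_E: restored image
```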

jiaxi-jiang (Owner) commented Sep 28, 2021

The pictures I show are exactly the results on 'test/Real'. Just run main_test_fbcnn_color_real.py directly and you will get the same results. Please note that the quality factor range [0, 100] is mapped to [1, 0] as the input of the network, so quality factor 10 corresponds to 0.9 as the input parameter.
https://github.com/jiaxi-jiang/FBCNN/blob/main/main_test_fbcnn_color_real.py#L87
Let me know if you still cannot get the same results.
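
In other words, the script maps qf_input = 1 - QF/100 before feeding it to the network; a tiny illustration (the helper name is hypothetical):

```python
def qf_to_network_input(quality_factor: float) -> float:
    """Map a JPEG quality factor in [0, 100] to the network's [1, 0] input scale."""
    return 1.0 - quality_factor / 100.0

print(qf_to_network_input(10))            # 0.9 -> aggressive artifact removal
print(round(qf_to_network_input(90), 2))  # 0.1 -> very light artifact removal
```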

jiaxi-jiang (Owner) commented

I don't know what your aim is in coming here. The images I show in this repo are very easy to reproduce from the provided code, pre-trained models, and test images. Malicious comments are not welcome.

Fig. 1 with input qf_control = 10
Fig. 2 with input qf_control = 5
Fig. 3 with input qf_control = 10
Fig. 4 with input qf_control = 10
Fig. 5 with input qf_control = 70
Fig. 6 with input qf_control = 70

Log file: Real_fbcnn_color.log

jiaxi-jiang reopened this Sep 29, 2021
jiaxi-jiang (Owner) commented

I just downloaded my repo on a new machine and ran the code without changing anything. The models and test images are exactly the same. The results you show, with many artifacts, suggest that qf_control was set to a large number, e.g. 90. So I am interested to see your results with qf_control = 5 or 10, because in that case most artifacts, together with some texture details, should be removed.

Repository owner deleted 4 comments from YangGangZhiQi Sep 29, 2021
cszn (Collaborator) commented Sep 29, 2021

Just run the code main_test_fbcnn_color_real.py.

python main_test_fbcnn_color_real.py

cszn closed this as completed Sep 29, 2021
Repository owner deleted a comment from jiaxi-jiang Sep 29, 2021
cszn added the question label Sep 29, 2021
YangGangZhiQi (Author) commented

@jiaxi-jiang @cszn When running the model in CPU mode, I get the same results as yours. But in GPU mode, the output pictures of the model with qf_control = [5, 10, 30, 50, 70, 90] are all identical. I printed the values of qf_embedding in the FBCNN network with qf_control = [5, 10, 50, 70, 90], and they are identical too. It's really weird that the outputs differ depending on the device (CPU/GPU).
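
For reference, a small sanity check along these lines can be run on both devices (a debugging sketch only; it assumes `model` and `img_L` are prepared as in main_test_fbcnn_color_real.py and that the forward pass accepts an explicit qf tensor and returns the restored image plus a predicted QF):

```python
# Debugging sketch: check whether different qf inputs actually change the
# output on a given device. The call signature is an assumption taken from
# how the test script is typically used.
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = model.to(device).eval()
img_L = img_L.to(device)

outputs = {}
with torch.no_grad():
    for quality_factor in (5, 10, 30, 50, 70, 90):
        qf = torch.tensor([[1.0 - quality_factor / 100.0]], device=device)
        img_E, _ = model(img_L, qf)
        outputs[quality_factor] = img_E

# If qf is wired through correctly, these differences should not all be zero.
baseline = outputs[90]
for quality_factor, img_E in outputs.items():
    diff = (img_E - baseline).abs().max().item()
    print(f'device={device}, qf={quality_factor}: max |diff| vs qf=90 = {diff:.6f}')
```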
