train/val/test batch #3

Open
weizequan opened this issue Dec 15, 2017 · 6 comments
@weizequan

Reading the method train(self, nb_train_batch, nb_test_batch, nb_validation_batch, validation_frequency = 10, show_filters = False), the shared code currently trains on patches (100x100) and also validates/tests on patches. Here you set crop = False; if one wants to train on full-size images, "crop" needs to be changed to True.
I don't understand why nb_test_batch = 80. In your paper, TABLE 1 gives Ntest = 2000, so I think nb_test_batch = 40 is right, because nb_test_batch (i.e., 40) * batch_size (i.e., 50) = Ntest (i.e., 2000).
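To make my point concrete, here is the arithmetic I have in mind (just a sketch, with batch_size = 50 taken from the paper; "clf" is only a placeholder name for the model object):

```python
# Sketch of the batch arithmetic I have in mind (assuming batch_size = 50 as in the paper).
batch_size = 50
N_test = 2000                          # Ntest from TABLE 1 of the paper
nb_test_batch = N_test // batch_size   # = 40, not 80

# So I would expect the call to look something like this ("clf" is just a placeholder):
# clf.train(nb_train_batch=..., nb_test_batch=nb_test_batch, nb_validation_batch=...)
```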

@NicoRahm
Owner

Thank you for your interest in this work. The reason nb_test_batch differs from the value in the article is that we used this code for further work that involved a larger database for training/validation/testing on patches (you can also see that the network architecture is a bit different from the one described in the article).
Also, you are right: if you want to train on full-size images you should set crop = True, although I advise cropping the full-size images first to get a patch database on which you can train the model. This speeds up training, as you no longer need to load the full image just to crop it at each iteration.
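For example, something along these lines would build such a patch database (a rough sketch using Pillow, not the script from this repository; the directory names are placeholders):

```python
# Rough sketch (not the script from this repository): pre-crop full-size images
# into non-overlapping 100x100 patches with Pillow, so training only loads patches.
import glob
import os
from PIL import Image

src_dir = "full_size_images"   # placeholder: directory with the full-size images
dst_dir = "patch_database"     # placeholder: output directory for the patches
patch_size = 100
os.makedirs(dst_dir, exist_ok=True)

for img_path in glob.glob(os.path.join(src_dir, "*.jpg")):
    img = Image.open(img_path)
    name = os.path.splitext(os.path.basename(img_path))[0]
    width, height = img.size
    for i in range(height // patch_size):
        for j in range(width // patch_size):
            box = (j * patch_size, i * patch_size,
                   (j + 1) * patch_size, (i + 1) * patch_size)
            # Note: saving as JPEG re-compresses the patch; adjust format/quality as needed.
            img.crop(box).save(os.path.join(dst_dir, f"{name}_{i}_{j}.jpg"), quality=95)
```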

@weizequan
Author

OK, thanks for your reply. I now need to compare my results with yours using your best framework, i.e., TABLE II, Stats-2L. Because I am very unfamiliar with TensorFlow, would you mind sending me the model.py that is consistent with the article (WIFS 2017, Distinguishing Computer Graphics from Natural Images Using Convolution Neural Networks)? My email: qweizework@gmail.com
Alternatively, could you tell me in detail what the differences between the two networks are, and I will change model.py in your GitHub repository myself for a fair comparison. Thank you so much!

@VisualMediaSec

Thanks for your work. I am currently researching a related topic and need the same dataset. I wonder if you could share your dataset with me; it would be a great help in making comparisons with my own work.

Btw, I have already downloaded the CG images from the link you shared on GitHub, but I also need the 1800 photographic images. Could you share those too?

Sorry to bother you. I would appreciate a reply as soon as possible, thank you very much!

@NicoRahm
Owner

Hi,
I just committed the list of the photographic images we used for training, validation and testing. You can download the complete RAISE dataset (http://mmlab.science.unitn.it/RAISE/download.html), extract the corresponding images from it and convert them to JPEG (for our first model we used a quality factor of 95, although in our latest experiments the models were tested with a quality factor of 70 to match that of the CG images...).
Sorry to share the data in such an inconvenient way; I can't upload the whole dataset as it is too large (10 GB)...
I hope this helped you in your research. Do not hesitate if you have further questions.

@VisualMediaSec

VisualMediaSec commented Jan 11, 2018

Thanks a lot. I also want to know how to convert TIFF or RAW images to JPEG (with a quality factor of 95). Could you share your code or bash command too?

@NicoRahm
Owner

NicoRahm commented Jan 18, 2018

Hi,
I am sorry for the delay.
We used DCRAW to read the RAW files (https://www.cybercom.net/~dcoffin/dcraw/) and cjpeg for the conversion to JPEG.
Basically, you can use the procedure provided in this blog post: http://www.mutaku.com/wp/index.php/2011/02/cook-your-raw-photos-into-jpeg-with-linux/ (which, as I recall, is what we did, although I have lost the exact bash command).
Just change the extension of the files to .NEF and the quality factor to 95, and that should reproduce our dataset.
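For example, a rough Python wrapper around that dcraw/cjpeg pipeline could look like this (not the exact command we used; it assumes dcraw and cjpeg are installed and on your PATH, and the directory names are placeholders):

```python
# Rough sketch (not the exact command we used): batch-convert .NEF files to JPEG
# with dcraw + cjpeg at quality 95. Assumes both tools are installed and on PATH.
import glob
import os
import subprocess

raw_dir = "RAISE_raw"    # placeholder: directory containing the downloaded .NEF files
out_dir = "RAISE_jpeg"   # placeholder: output directory for the JPEG images
os.makedirs(out_dir, exist_ok=True)

for nef_path in glob.glob(os.path.join(raw_dir, "*.NEF")):
    jpg_name = os.path.splitext(os.path.basename(nef_path))[0] + ".jpg"
    jpg_path = os.path.join(out_dir, jpg_name)
    with open(jpg_path, "wb") as out_file:
        # dcraw -c decodes the RAW file to stdout (-w uses the camera white balance);
        # cjpeg reads that stream and writes a JPEG at the requested quality.
        dcraw = subprocess.Popen(["dcraw", "-c", "-w", nef_path], stdout=subprocess.PIPE)
        subprocess.run(["cjpeg", "-quality", "95"], stdin=dcraw.stdout,
                       stdout=out_file, check=True)
        dcraw.stdout.close()
        dcraw.wait()
```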
