In the paper you use various datasets (BSD500, Waterloo, etc.) that contain images of various sizes. For training, 128x128 patches are used. Could you give information about the number of patches extracted (as in the DnCNN paper)? Basically, I want to know the dimensions of the training dataset, ?x128x128x3. Also, did you use any data augmentation methods?
I did not use any data augmentation methods. I just counted the training data: it is about 10260 batches. In our experiments, I set the batch size to 36, so the total training data is 10260 x 36 x 128 x 128 x 3.
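A minimal sketch of how such 128x128 patches could be counted and extracted from variously sized images. This is an assumption for illustration, not the authors' actual extraction code; the non-overlapping stride and the `extract_patches` helper are hypothetical.

```python
import numpy as np

def extract_patches(image, patch_size=128, stride=128):
    """Extract patches of patch_size x patch_size from an HxWxC image.

    Hypothetical helper: a non-overlapping stride is assumed here;
    the paper does not specify the stride used.
    """
    h, w, c = image.shape
    patches = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(image[y:y + patch_size, x:x + patch_size, :])
    if not patches:
        return np.empty((0, patch_size, patch_size, c), dtype=image.dtype)
    return np.stack(patches)

# Example: a 481x321 RGB image (typical BSD500 size) yields 3 x 2 = 6
# non-overlapping 128x128 patches.
img = np.zeros((481, 321, 3), dtype=np.float32)
print(extract_patches(img).shape)  # (6, 128, 128, 3)
```

With 10260 batches of 36 patches each, the dataset stacks to roughly 10260 * 36 = 369360 patches of shape 128x128x3.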