
The usage of _dequeue_and_enqueue function in RealESRNetModel #44

Closed · YangGangZhiQi opened this issue Aug 19, 2021 · 7 comments

@YangGangZhiQi

Hi,
I have read the code several times, but I cannot figure out what the role of the _dequeue_and_enqueue function in RealESRNetModel is. This function is only used in feed_data(), which just puts self.lq and self.gt into self.queue_lq and self.queue_gt. However, I cannot find any other code that uses self.queue_lq and self.queue_gt. I would appreciate it if someone could explain this.

@xinntao (Owner) commented Aug 23, 2021

`self._dequeue_and_enqueue()`

This line in feed_data() calls _dequeue_and_enqueue.
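
For context, a rough sketch of where that call sits (paraphrased from this thread, not the repository's exact code; the degradation synthesis itself is elided):

```python
# Hedged sketch: feed_data produces the LQ/GT pair and then hands it to the pool.
# Only the call order is illustrated here.
def feed_data(self, data):
    self.gt = data['gt'].to(self.device)
    # ... synthesize self.lq from self.gt with the random degradation pipeline ...
    self._dequeue_and_enqueue()  # enqueue the fresh pair, dequeue a shuffled one
```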

@YangGangZhiQi (Author)
@xinntao Yes, this line calls self._dequeue_and_enqueue(). I just cannot figure out how your code uses self.queue_lq and self.queue_gt; I cannot find any other code that uses these two member variables. I know the role of this code block is to put the input data into a training pool, but how does the code achieve this?

@xinntao (Owner) commented Aug 24, 2021

[screenshot of the _dequeue_and_enqueue implementation]

  1. It first randomly shuffles the images in the pool,
  2. then takes the first N samples as self.lq and self.gt.
  3. Finally, we obtain the shuffled self.lq and self.gt (see the sketch below).
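
A minimal, self-contained sketch of that pool logic (illustrative names and default size, not the repository's exact code):

```python
import torch

class TrainingPairPool:
    """Sketch of a training-pair pool: enqueue the incoming batch, and once the
    pool is full, shuffle it and dequeue a batch from the front."""

    def __init__(self, queue_size=180):
        self.queue_size = queue_size
        self.queue_ptr = 0
        self.queue_lq = None
        self.queue_gt = None

    @torch.no_grad()
    def dequeue_and_enqueue(self, lq, gt):
        b = lq.size(0)
        if self.queue_lq is None:
            # lazily allocate the pool with the shapes of the first batch
            assert self.queue_size % b == 0, 'queue size must be divisible by batch size'
            self.queue_lq = lq.new_zeros(self.queue_size, *lq.size()[1:])
            self.queue_gt = gt.new_zeros(self.queue_size, *gt.size()[1:])
        if self.queue_ptr == self.queue_size:
            # pool is full: shuffle it, take the first b samples out,
            # and put the incoming batch in their place
            idx = torch.randperm(self.queue_size)
            self.queue_lq = self.queue_lq[idx]
            self.queue_gt = self.queue_gt[idx]
            lq_out = self.queue_lq[0:b].clone()
            gt_out = self.queue_gt[0:b].clone()
            self.queue_lq[0:b] = lq.clone()
            self.queue_gt[0:b] = gt.clone()
            return lq_out, gt_out
        # pool not yet full: enqueue the batch and return it unchanged
        self.queue_lq[self.queue_ptr:self.queue_ptr + b] = lq.clone()
        self.queue_gt[self.queue_ptr:self.queue_ptr + b] = gt.clone()
        self.queue_ptr += b
        return lq, gt
```

So once the pool fills up, the batch that actually reaches the network is a random draw from the last queue_size synthesized pairs rather than the pair that was just produced.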

@YangGangZhiQi (Author) commented Aug 25, 2021

I get it. Thanks for your patient explanation! My understanding of the _dequeue_and_enqueue function was poor!

@Arseny-N commented Sep 1, 2021

Hi, @xinntao. Thank you for the awesome work!

I also do not quite understand the purpose of this function.
Why do we need to sample the training batch from a pool of images?

As far as I can tell, the most obvious reason is to avoid creating a batch
from sequential indices. But we already shuffle the indices in EnlargedSampler,
so the batch will not be sequential.

On the other hand, the shuffling may result in some samples being used more
than once. Is this a technique for improving training?

Thanks in advance,
Arseny

@xinntao (Owner) commented Sep 1, 2021

@Arseny-N
Batch processing in PyTorch limits the diversity of synthetic degradations within a batch. For example, samples in a batch cannot have different resize scaling factors. Therefore, we employ a training pair pool to increase the degradation diversity in a batch.
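
To make that limitation concrete, here is a tiny illustration (not the repository's code): when the degradation is applied per batch, one random resize factor is shared by every sample in that batch, whereas the pool mixes pairs that were degraded in different iterations, so a dequeued batch contains several different factors.

```python
import random
import torch
import torch.nn.functional as F

# One scale factor is drawn per iteration, so all 4 crops in this batch are
# resized identically (illustrative of the per-batch limitation only).
gt_batch = torch.rand(4, 3, 64, 64)
scale = random.uniform(0.5, 1.5)
lq_batch = F.interpolate(gt_batch, scale_factor=scale, mode='bicubic')

# With the pair pool, pairs kept from earlier iterations were produced with
# different draws of `scale`, so a dequeued batch mixes several factors.
```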

@Arseny-N commented Sep 2, 2021

Thank you for the timely explanation!
