Easy and hard augmentation? #1

Closed
wangkaihong opened this issue Dec 9, 2021 · 1 comment

wangkaihong commented Dec 9, 2021

Hi,

Thanks for sharing this exciting work! I have some practical questions regarding the implementation of the augmentation in your method:

In pos_dual.py, the augmentations are annotated here:

        # Teacher
        # Easy Augmentation
        with torch.no_grad():
            unsup_fea1, unsup_ht1 = self.resnet(unsup_x)
            unsup_fea2, unsup_ht2 = self.resnet2(unsup_x)

and

        # Student
        # Hard Augmentation
        _, cons_ht1 = self.resnet(unsup_x_trans)
        _, cons_ht2 = self.resnet2(unsup_x_trans_2)

but these are essentially just images being passed through the two ResNets, aren't they?

As for the following part, which to my understanding is where the augmentation actually happens:

        # Transform
        # Apply Affine Transformation again for hard augmentation
        if self.cfg.UNSUP_TRANSFORM:
            with torch.no_grad():
                theta = self.get_batch_affine_transform(batch_size)
                grid = F.affine_grid(theta, sup_x.size()).float()

                unsup_x_trans = F.grid_sample(unsup_x_trans, grid)
                unsup_x_trans_2 = F.grid_sample(unsup_x_trans_2, grid)

                ht_grid = F.affine_grid(theta, unsup_ht1.size()).float()
                unsup_ht_trans1 = F.grid_sample(unsup_ht1.detach(), ht_grid)
                unsup_ht_trans2 = F.grid_sample(unsup_ht2.detach(), ht_grid)

These augmentations seem to share the same set of affine parameters, which would make them equally strong rather than differing in magnitude. Would you please clarify these parts?

Thanks a lot in advance for your time.

xierc (Owner) commented Dec 18, 2021

Hi, Kaihong. Sorry for my late reply. This part is in fact a simple implementation of easy-hard augmentation; I will explain it in detail below.

(1) "For this part where from my understanding is the real place augmentations happen: ..."
Yes, we used the image that augmented twice as the "hard" augmentation in this part.

In fact, the raw image has already been augmented once during preprocessing (giving unsup_x), and that is used as the "easy" augmentation.
It is then augmented again (giving unsup_x_trans), so the overall range of rotation and scaling is larger and it can be regarded as the hard augmentation.

This implementation is simpler, because the difference grid (ht_grid) between the "easy" and "hard" augmentations can be obtained directly and used to produce the target heatmap (unsup_ht_trans1).
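
To make this concrete, here is a minimal sketch of the idea (placeholder names, not the code in this repo): the "easy" view and the detached teacher heatmap are warped with the same extra affine theta, so the warped heatmap becomes the consistency target for the student's prediction on the "hard" view.

    # Minimal sketch of the easy -> hard scheme (illustrative names, not the repo code).
    import torch
    import torch.nn.functional as F

    def make_hard_view_and_target(easy_img, easy_heatmap, theta):
        # easy_img:     (B, 3, H, W) image already augmented once in preprocessing ("easy" view)
        # easy_heatmap: (B, K, h, w) teacher prediction on the easy view
        # theta:        (B, 2, 3)    extra affine transform = the easy-to-hard "difference"
        with torch.no_grad():
            # Warp the easy image again -> "hard" view.
            img_grid = F.affine_grid(theta, easy_img.size())
            hard_img = F.grid_sample(easy_img, img_grid)

            # Warp the teacher heatmap with the *same* theta so it is aligned with
            # whatever the student predicts on the hard view.
            ht_grid = F.affine_grid(theta, easy_heatmap.size())
            target_heatmap = F.grid_sample(easy_heatmap.detach(), ht_grid)
        return hard_img, target_heatmap

    # Illustrative usage: cons_loss = criterion(student(hard_img), target_heatmap)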

(2) We also experimented with using two augmentations with different parameters during preprocessing (not included in this repo yet due to lack of time); the result was similar to the current implementation.
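
For reference, a rough sketch of what that alternative involves (again with illustrative names, under the F.affine_grid convention that theta maps output coordinates to input coordinates): two independent affine parameter sets are sampled, and the prediction on one view is warped into the other view's frame through their relative transform. This is also why the current implementation is simpler: when the hard view is just the easy view augmented again, the relative grid is available directly.

    # Rough sketch of the alternative: two independently augmented views of the raw image,
    # aligned afterwards via the relative affine transform (illustrative, not the repo code).
    import torch
    import torch.nn.functional as F

    def relative_theta(theta_a, theta_b):
        # theta_a, theta_b: (B, 2, 3) affine parameters that generate view A and view B from
        # the raw image via F.affine_grid / F.grid_sample.  Under that convention theta maps
        # view coordinates to raw coordinates, so the transform taking view-B coordinates to
        # view-A coordinates is theta_a^{-1} @ theta_b, composed as 3x3 homogeneous matrices.
        n = theta_a.size(0)
        bottom = theta_a.new_tensor([0.0, 0.0, 1.0]).expand(n, 1, 3)
        mat_a = torch.cat([theta_a, bottom], dim=1)
        mat_b = torch.cat([theta_b, bottom], dim=1)
        return torch.bmm(torch.inverse(mat_a), mat_b)[:, :2, :]

    # Illustrative usage, supervising the prediction on view B with the one on view A:
    #   theta_ab = relative_theta(theta_a, theta_b)
    #   grid_ab  = F.affine_grid(theta_ab, heatmap_a.size())
    #   target_b = F.grid_sample(heatmap_a.detach(), grid_ab)
    # (Regions cropped out of view A cannot be recovered in the warped target.)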

I hope this answers your question. Feel free to ask if you have more questions. 😋

xierc closed this as completed on Aug 22, 2023