Hi,
Thanks for sharing this exciting work! I have some practical questions regarding the implementation of the augmentation in your method:
In the pos_dual.py file, the augmentations were annotated here and here, but they are essentially just images being passed through ResNets?
As for the following part, which, from my understanding, is where the augmentations really happen:
# Transform
# Apply affine transformation again for hard augmentation
if self.cfg.UNSUP_TRANSFORM:
    with torch.no_grad():
        theta = self.get_batch_affine_transform(batch_size)
        grid = F.affine_grid(theta, sup_x.size()).float()
        unsup_x_trans = F.grid_sample(unsup_x_trans, grid)
        unsup_x_trans_2 = F.grid_sample(unsup_x_trans_2, grid)

        ht_grid = F.affine_grid(theta, unsup_ht1.size()).float()
        unsup_ht_trans1 = F.grid_sample(unsup_ht1.detach(), ht_grid)
        unsup_ht_trans2 = F.grid_sample(unsup_ht2.detach(), ht_grid)
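For readers unfamiliar with the grid-sampling API used above, here is a minimal, self-contained sketch of the same pattern: one affine matrix per sample warps both the images and the detached heatmaps, keeping targets geometrically aligned with inputs. Tensor shapes, names, and the fixed rotation angle are illustrative, not taken from the repo.

```python
import math
import torch
import torch.nn.functional as F

batch_size = 2
images = torch.rand(batch_size, 3, 64, 64)     # stand-in for unsup_x_trans
heatmaps = torch.rand(batch_size, 17, 16, 16)  # stand-in for unsup_ht1

# One 2x3 affine matrix per sample: here a fixed 30-degree rotation.
angle = math.radians(30.0)
theta = torch.tensor([[math.cos(angle), -math.sin(angle), 0.0],
                      [math.sin(angle),  math.cos(angle), 0.0]])
theta = theta.unsqueeze(0).repeat(batch_size, 1, 1)

# The SAME theta is used for both resolutions, so the warped heatmaps
# remain valid targets for the warped images.
img_grid = F.affine_grid(theta, images.size(), align_corners=False)
warped_images = F.grid_sample(images, img_grid, align_corners=False)

ht_grid = F.affine_grid(theta, heatmaps.size(), align_corners=False)
warped_heatmaps = F.grid_sample(heatmaps.detach(), ht_grid, align_corners=False)

print(warped_images.shape)    # torch.Size([2, 3, 64, 64])
print(warped_heatmaps.shape)  # torch.Size([2, 17, 16, 16])
```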
These augmentations seem to share the same set of parameters, which would put them at the same augmentation strength rather than differing in magnitude. Would you please clarify these parts?
Thanks a lot in advance for your time.
Hi, Kaihong. Sorry for my late reply. This part is in fact a simple implementation of easy-hard augmentation; I will explain it in detail below.
(1) "For this part where from my understanding is the real place augmentations happen: ..."
Yes, in this part we use the image that has been augmented twice as the "hard" view.
The raw image is already augmented once in preprocessing (producing unsup_x), and that serves as the "easy" augmentation.
It is then augmented again (producing unsup_x_trans), so the effective range of rotation angle and scaling is larger, and the result can be thought of as the hard augmentation.
This implementation is simpler because the difference grid (ht_grid) between the "easy" and "hard" augmentations is easy to obtain, which directly gives us the target heatmap (unsup_ht_trans1).
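The "larger range" point can be seen directly from the algebra: composing two rotations adds their angles, so the doubly-augmented view spans a wider effective rotation range than either single augmentation. A toy, dependency-free sketch (the angles are arbitrary examples, not values from the repo):

```python
import math

def rotation(deg):
    """2x2 rotation matrix for an angle given in degrees."""
    a = math.radians(deg)
    return [[math.cos(a), -math.sin(a)],
            [math.sin(a),  math.cos(a)]]

def matmul(m1, m2):
    """Multiply two 2x2 matrices."""
    return [[sum(m1[i][k] * m2[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def angle_of(m):
    """Recover the rotation angle (degrees) from a 2x2 rotation matrix."""
    return math.degrees(math.atan2(m[1][0], m[0][0]))

# "easy" view: one rotation from preprocessing;
# "hard" view: a second, in-training rotation applied on top of it.
easy = rotation(25.0)
extra = rotation(20.0)
hard = matmul(extra, easy)  # the composition seen by the hard branch

print(round(angle_of(easy), 6))  # 25.0
print(round(angle_of(hard), 6))  # 45.0
```

The same reasoning applies to scaling: the composed transform multiplies the two scale factors, again widening the effective range.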
(2) We also experimented with using two augmentations with different parameters in preprocessing (not ready in this repo yet due to lack of time); the results were similar to the current implementation.
I hope my answer helps. Feel free to ask if you have more questions. 😋
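For reference, a random per-batch theta of the kind that get_batch_affine_transform returns could look like the sketch below. This is a hypothetical stand-in, not the repo's implementation; the rotation and scale ranges are illustrative defaults only.

```python
import math
import torch

def random_batch_affine(batch_size, max_rot_deg=30.0, scale_range=(0.75, 1.25)):
    """Sample one random rotation+scale affine matrix (2x3) per sample.

    Hypothetical stand-in for the repo's get_batch_affine_transform;
    the rotation and scale ranges here are illustrative only.
    """
    angles = (torch.rand(batch_size) * 2 - 1) * math.radians(max_rot_deg)
    scales = torch.empty(batch_size).uniform_(*scale_range)
    cos, sin = torch.cos(angles) * scales, torch.sin(angles) * scales
    theta = torch.zeros(batch_size, 2, 3)  # last column = zero translation
    theta[:, 0, 0] = cos
    theta[:, 0, 1] = -sin
    theta[:, 1, 0] = sin
    theta[:, 1, 1] = cos
    return theta

theta = random_batch_affine(4)
print(theta.shape)  # torch.Size([4, 2, 3])
```

A theta of this shape can be fed straight into F.affine_grid, as in the snippet quoted from the question above.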