
The prob of applying augmentation? #20

Open · ghost opened this issue Oct 3, 2020 · 1 comment
ghost commented Oct 3, 2020

Hi and thanks for this awesome repo.

I just checked the original TensorFlow implementation and found a part that differs from this repo. In the original implementation, there is a probability of applying or not applying each augmentation, but I did not find that in this repo.

The link for TensorFlow version: https://github.com/tensorflow/tpu/blob/5144289ba9c9e5b1e55cc118b69fe62dd868657c/models/official/efficientnet/autoaugment.py#L532

Original:
with tf.name_scope('randaug_layer_{}'.format(layer_num)):
  for (i, op_name) in enumerate(available_ops):
    prob = tf.random_uniform([], minval=0.2, maxval=0.8, dtype=tf.float32)
    func, _, args = _parse_policy_info(op_name, prob, random_magnitude,
                                       replace_value, augmentation_hparams)

this repo:
ops = random.choices(self.augment_list, k=self.n)
# print(ops)
for op, minval, maxval in ops:
    val = (float(self.m) / 30) * float(maxval - minval) + minval
    img = op(img, val)

May I ask if there is a reason for this? Or is there a part I am missing?

Thanks in advance
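
For reference, a minimal sketch of how the per-op apply probability from the TensorFlow version could be grafted onto the loop quoted above. This is an illustration, not the repo's actual code; it assumes the same (op, minval, maxval) tuple format for the augment list, and the 0.2–0.8 range simply mirrors the TF snippet:

import random

def randaugment_with_prob(img, augment_list, n, m):
    # Same sampling as the repo: pick n ops from the augment list.
    ops = random.choices(augment_list, k=n)
    for op, minval, maxval in ops:
        # Mirror the TF version: draw a per-op apply probability
        # uniformly from [0.2, 0.8], then apply the op with that
        # probability (so each op fires ~50% of the time on average).
        prob = random.uniform(0.2, 0.8)
        if random.random() > prob:
            continue  # skip this op; img passes through unchanged
        val = (float(m) / 30) * float(maxval - minval) + minval
        img = op(img, val)
    return img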

@JiyueWang

In addition, the Identity operation is not included
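
A sketch of how an Identity entry could be included in the augment list, in the (op, minval, maxval) format used above. The helper name and magnitude bounds here are illustrative assumptions, not the repo's actual code:

def Identity(img, _v):
    # No-op: returns the image unchanged, so sampling it
    # amounts to skipping one augmentation layer.
    return img

# Hypothetical entry alongside the existing ops; the magnitude
# bounds are unused by Identity but keep the tuple format uniform.
augment_list = [
    (Identity, 0, 1),
    # ... the other (op, minval, maxval) entries ...
]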

MohammadJavadD added a commit to MohammadJavadD/pytorch-randaugment that referenced this issue Nov 25, 2020
Adding prob to address issue ildoonet#20: prob = ildoonet#20 (comment)