Hi and thanks for this awesome repo.
I just checked the original TensorFlow implementation and found a part that differs from this repo. In the original implementation there is a probability of applying or not applying each augmentation, but I did not find that here.
The link for the TensorFlow version: https://github.com/tensorflow/tpu/blob/5144289ba9c9e5b1e55cc118b69fe62dd868657c/models/official/efficientnet/autoaugment.py#L532
Original:
```python
with tf.name_scope('randaug_layer_{}'.format(layer_num)):
  for (i, op_name) in enumerate(available_ops):
    prob = tf.random_uniform([], minval=0.2, maxval=0.8, dtype=tf.float32)
    func, _, args = _parse_policy_info(op_name, prob, random_magnitude,
                                       replace_value, augmentation_hparams)
```
This repo:

```python
ops = random.choices(self.augment_list, k=self.n)
# print(ops)
for op, minval, maxval in ops:
    val = (float(self.m) / 30) * float(maxval - minval) + minval
    img = op(img, val)
```
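For illustration, here is one hedged sketch of how the TF-style apply probability could be folded into the repo's loop. This is not the repo's actual code: `augment_list`, `n`, `m`, and the `(op, minval, maxval)` tuple shape are taken from the snippet above, and `op(img, val)` is assumed to apply one augmentation to `img`. It mirrors the TF reference by drawing a per-op probability from U(0.2, 0.8) and sometimes skipping the op.

```python
import random

def randaugment_with_prob(img, augment_list, n, m):
    # Sample n ops, as in the repo's loop above.
    ops = random.choices(augment_list, k=n)
    for op, minval, maxval in ops:
        # TF draws prob ~ U(0.2, 0.8) per op; skip the op with
        # probability 1 - prob, as the TF implementation can.
        prob = random.uniform(0.2, 0.8)
        if random.random() > prob:
            continue
        # Same magnitude mapping as the repo: m in [0, 30].
        val = (float(m) / 30) * float(maxval - minval) + minval
        img = op(img, val)
    return img
```

With this change, on average only about half of the sampled ops are actually applied each call, which matches the stochastic behavior of the TF version.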
May I ask if there is a reason for this, or is there something I'm missing?
Thanks in advance.