
Problem with using LayerNorm in TensorLayer 2 #1082

Open

mrgreen3325 opened this issue May 15, 2020 · 3 comments

Comments

@mrgreen3325

mrgreen3325 commented May 15, 2020

New Issue Checklist

Issue Description

[INSERT DESCRIPTION OF THE PROBLEM]

Reproducible Code

  • Which OS are you using?
  • Please provide reproducible code for your issue. Without any reproducible code, you will probably not receive any help.

[INSERT CODE HERE]

# ======================================================== #
###### THIS CODE IS AN EXAMPLE, REPLACE WITH YOUR OWN ######
# ======================================================== #
nn = Conv2d(64, (3, 3), (1, 1), padding='SAME', W_init=w_init, b_init=None)(n)
nn = LayerNorm(act=tf.nn.relu)(nn)

# ======================================================== #
###### THIS CODE IS AN EXAMPLE, REPLACE WITH YOUR OWN ######
# ======================================================== #

Error:
tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [1,1,1,8] vs. [1,1,1,64] [Op:Mul]
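The mismatch can be reproduced with plain NumPy broadcasting (a minimal sketch, independent of TensorLayer): an element-wise multiply of tensors whose trailing dimensions are 8 and 64 fails for the same reason the `[Op:Mul]` above does.

```python
import numpy as np

# Shapes taken from the error message: a scale tensor of depth 8
# versus a feature map with 64 channels.
a = np.ones((1, 1, 1, 8))
b = np.ones((1, 1, 1, 64))

try:
    a * b  # broadcasting requires trailing dims to match or be 1
except ValueError as e:
    print("broadcast error:", e)
```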

The original code uses BatchNorm2d with batch_size = 8.
I wanted to change it to LayerNorm to see the difference.
However, the program reports the error above.
Am I using it improperly?
Thanks for the help.

@Laicheng0830
Member

Laicheng0830 commented May 18, 2020

You may need to set the parameter begin_norm_axis=-1:
nn = Conv2d(64, (3, 3), (1, 1), padding='SAME', W_init=w_init, b_init=None)(n)
nn = LayerNorm(begin_norm_axis=-1, act=tf.nn.relu)(nn)

@mrgreen3325
Author

> You may need to set the parameter begin_norm_axis=-1
> nn = Conv2d(64, (3, 3), (1, 1), padding='SAME', W_init=w_init, b_init=None)(n)
> nn = LayerNorm(begin_norm_axis=-1, act=tf.nn.relu)(nn)

Thanks, Laicheng.
May I know what the begin_norm_axis setting means?
In fact, the input batch of images is [8, 48, 48, 3] (batch_size = 8).

@Laicheng0830
Member

We want to compute the mean and variance over the normalization axes:
norm_axes = range(begin_norm_axis, len(inputs_shape)-1)
mean, var = tf.nn.moments(inputs, norm_axes, keepdims=True)
For so-called "global normalization", used with convolutional filters of shape [batch, height, width, depth], pass norm_axes=[0, 1, 2].
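A NumPy sketch of what those axis choices do to the moment shapes (np.mean stands in for tf.nn.moments here; the [8, 48, 48, 3] input shape is taken from the comment above):

```python
import numpy as np

x = np.random.rand(8, 48, 48, 3)  # [batch, height, width, depth]

# Normalize over the channel axis only: one mean per spatial position.
mean = x.mean(axis=-1, keepdims=True)
var = x.var(axis=-1, keepdims=True)
print(mean.shape)  # (8, 48, 48, 1)

# "Global normalization": reduce over batch, height, and width,
# leaving one mean per channel.
g_mean = x.mean(axis=(0, 1, 2), keepdims=True)
print(g_mean.shape)  # (1, 1, 1, 3)
```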
