[Keras] fix weight shape in dilated conv #1715
Conversation
Not ready for merge, one issue to fix.
Ready to merge now.
Please rebase against master. @kazum @PariksheetPinjari909 please review.
Added a minor comment. Other parts look good to me.
nnvm/python/nnvm/frontend/keras.py
Outdated
- alpha = keras_layer.alpha if hasattr(keras_layer, "alpha") else 1.6732
- gamma = keras_layer.gamma if hasattr(keras_layer, "gamma") else 1.0507
+ alpha = keras_layer.alpha if hasattr(keras_layer, "alpha") else 1.67326324235437728481704299
+ gamma = keras_layer.gamma if hasattr(keras_layer, "gamma") else 1.05070098735548049341933498
Let's use exactly the same values as the ones in the Keras project:
https://github.com/keras-team/keras/blob/master/keras/activations.py#L80
alpha = 1.6732632423543772848170429916717
scale = 1.0507009873554804934193349852946
Have updated. BTW, I previously used shorter alpha and scale in order not to exceed the 100 characters per line lint limit.
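For reference, a minimal sketch of the fallback pattern under discussion: read `alpha`/`gamma` from the layer if present, otherwise use the exact constants from the Keras project. The `convert_selu` helper and the way the layer object is passed in are illustrative only, not the actual nnvm frontend code.

```python
import numpy as np

# Exact SELU constants from the Keras project (activations.py).
SELU_ALPHA = 1.6732632423543772848170429916717
SELU_GAMMA = 1.0507009873554804934193349852946

def convert_selu(keras_layer, x):
    """Hypothetical converter: fall back to the exact Keras constants
    when the layer does not carry its own alpha/gamma attributes."""
    alpha = getattr(keras_layer, "alpha", SELU_ALPHA)
    gamma = getattr(keras_layer, "gamma", SELU_GAMMA)
    # selu(x) = gamma * (x if x > 0 else alpha * (exp(x) - 1))
    return gamma * np.where(x > 0, x, alpha * (np.exp(x) - 1))

x = np.array([-1.0, 0.0, 2.0])
print(convert_selu(object(), x))  # object() has no alpha/gamma, so defaults apply
```

One way to stay under a 100-characters-per-line lint limit without truncating the constants would be to hoist them into named module-level values like this, rather than writing the literals inline.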
LGTM, thanks!
The weight shape of a dilated convolution passed from the frontend to the NNVM compiler should keep the original kernel height and width, not include the zeros inserted by dilation.
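As a rough illustration of that point (the shapes and helper names below are illustrative, not taken from the PR): dilation enlarges the kernel's effective extent, which only matters for the output-size computation; the weight tensor handed to the compiler keeps the original kernel height and width.

```python
import numpy as np

def dilated_extent(k, d):
    """Effective kernel extent once (d - 1) zeros are inserted
    between taps: k + (k - 1) * (d - 1)."""
    return k + (k - 1) * (d - 1)

def conv2d_out_size(in_size, k, stride=1, pad=0, dilation=1):
    """Output spatial size of a 2D convolution along one axis."""
    return (in_size + 2 * pad - dilated_extent(k, dilation)) // stride + 1

# Illustrative weight tensor: 3x3 kernel, 16 output channels, 8 input channels.
weights = np.zeros((16, 8, 3, 3), dtype="float32")

# The weight shape passed to the compiler stays (16, 8, 3, 3);
# only the output-size computation uses the dilated extent.
kh, kw = weights.shape[2], weights.shape[3]
print(weights.shape)                                 # (16, 8, 3, 3)
print(dilated_extent(kh, 2), dilated_extent(kw, 2))  # 5 5
print(conv2d_out_size(32, kh, dilation=2))           # 28
```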