Transposed convolution (deconvolution) #36
Conversation
Thanks for the PR! @vloncar
Would you mind adding the corresponding functions in utils.py as well, e.g., for quantizing and saving the weights?
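For context, one way such a utils.py helper could look — this is a hypothetical sketch, not the actual qkeras API; the function name, the `get_quantizers` hook, and the `.npz` format are all illustrative assumptions:

```python
import numpy as np

def save_quantized_weights(model, path):
    # Hypothetical helper: collect each layer's weights, apply the layer's
    # quantizers if it exposes any, and save everything to one .npz archive.
    quantized = {}
    for layer in model.layers:
        get_q = getattr(layer, "get_quantizers", None)
        weights = layer.get_weights()
        quantizers = get_q() if get_q else [None] * len(weights)
        for i, (w, q) in enumerate(zip(weights, quantizers)):
            key = "%s_%d" % (layer.name, i)
            # Fall back to the raw weights for non-quantized layers.
            quantized[key] = q(w) if q is not None else w
    np.savez(path, **quantized)
```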
examples/example_mnist_ae.py (outdated)

    x_train = x_train[..., np.newaxis]
    x_test = x_test[..., np.newaxis]
    x_train /= 255.0
Shall we use 256, since it is a power-of-two number?
This code is based on the other MNIST examples, and they all use 255. It should be the same, no?
My thought: 256 is a power of two and more hardware-friendly for the implementation.
@zhuangh I'll make the changes you requested over the weekend. I didn't forget, I was just working on something else.
@vloncar awesome. thanks!
Force-pushed from 52bceb3 to c90ef5f
@zhuangh do you have time to review this? thanks!
Force-pushed from c90ef5f to d2b2210
Thanks @vloncar. Pulled into the internal system for another round of review.
ping @lishanok
Hi @vloncar, could you also sync qkeras/autoqkeras_internal.py with the latest code? We just made some changes to it. Thanks!
@lishanok I don't see any conflicts with the latest changes (Activation and limits).
fixed line-too-long
This PR adds QConv2DTranspose, which is useful in autoencoders. I can also add QConv1DTranspose, but the equivalent Keras layer is only available in nightly TF releases, not in the stable channel yet.
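As a rough illustration of what a transposed convolution does — a hand-rolled NumPy sketch, not the QConv2DTranspose implementation — each input pixel scatters a kernel-sized patch onto a stride-spaced output grid, upsampling the spatial dimensions (the operation autoencoder decoders rely on):

```python
import numpy as np

def conv2d_transpose(x, k, stride=2):
    # Naive single-channel transposed convolution ("deconvolution"):
    # every input pixel adds a scaled copy of the kernel into the output,
    # with successive patches offset by `stride` pixels.
    h, w = x.shape
    kh, kw = k.shape
    out = np.zeros(((h - 1) * stride + kh, (w - 1) * stride + kw))
    for i in range(h):
        for j in range(w):
            out[i*stride:i*stride+kh, j*stride:j*stride+kw] += x[i, j] * k
    return out

x = np.ones((7, 7))   # e.g. a 7x7 bottleneck feature map
k = np.ones((3, 3))
y = conv2d_transpose(x, k, stride=2)
print(y.shape)  # (15, 15) -- 'valid'-style output; 'same' padding would crop to 14x14
```

Note the output size `(h - 1) * stride + kh` is the 'valid' (full) result; Keras-style 'same' padding crops it to exactly `h * stride`.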