
QDense_batchnorm? #64

Open
thesps opened this issue Mar 15, 2021 · 2 comments

thesps commented Mar 15, 2021

I see you have a qconv2d_batchnorm layer, which folds the weights of the two layers together and then quantizes.

We're bringing support for that to hls4ml, and it should help us save some resources and reduce latency.

I'm wondering, do you plan to add the equivalent combined QDense + BatchNormalization layer to QKeras?
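For reference, here is a minimal NumPy sketch of the standard batch-norm folding arithmetic such a combined QDense + BatchNormalization layer would perform; the function and argument names are illustrative, not QKeras API:

```python
import numpy as np

def fold_dense_batchnorm(kernel, bias, gamma, beta, moving_mean, moving_var, eps=1e-3):
    """Fold BatchNormalization parameters into the preceding Dense layer.

    BN(x @ kernel + bias) = gamma * (x @ kernel + bias - mean) / sqrt(var + eps) + beta
                          = x @ (kernel * scale) + (bias - mean) * scale + beta,
    where scale = gamma / sqrt(var + eps).
    """
    scale = gamma / np.sqrt(moving_var + eps)      # shape: (units,)
    folded_kernel = kernel * scale                  # broadcast over output units
    folded_bias = (bias - moving_mean) * scale + beta
    return folded_kernel, folded_bias
```

In a quantized, folded layer the weight quantizer would then be applied to folded_kernel and folded_bias rather than to the raw Dense weights.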

zhuangh commented Mar 15, 2021

@thesps Great, and glad to see this helps! Yes, QDenseBatchnorm is one of our TODO items, but we have other higher-priority tasks at the moment.

zhuangh commented Jun 17, 2021

#74
