Add support for qdense_batchnorm in QKeras #74

Open · wants to merge 10 commits into master

Conversation

julesmuhizi

Add support for qdense_batchnorm by folding the QDense kernel with the batch-normalization parameters, then computing the qdense_batchnorm output from the QDense inputs and the folded kernel.
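
For reference, the folding described here presumably follows the standard batch-normalization folding identity: with a per-unit scale = gamma / sqrt(moving_var + eps), the folded kernel is W * scale and the folded bias is (b - moving_mean) * scale + beta. Below is a minimal NumPy sketch of that identity; the function name and the epsilon default are illustrative, not the PR's actual API.

```python
import numpy as np

def fold_dense_batchnorm(kernel, bias, gamma, beta, moving_mean, moving_var, eps=1e-3):
    """Fold BatchNormalization parameters into a Dense kernel and bias.

    kernel has shape (input_dim, units); bias, gamma, beta, moving_mean and
    moving_var are per-unit vectors of shape (units,). This is the usual
    identity BN(x @ W + b) == x @ W' + b'.
    """
    scale = gamma / np.sqrt(moving_var + eps)         # shape (units,)
    folded_kernel = kernel * scale                    # broadcasts over the units axis
    folded_bias = (bias - moving_mean) * scale + beta
    return folded_kernel, folded_bias
```

Presumably, as in the existing QConv2DBatchnorm, the kernel quantizer is then applied to the folded kernel before the matmul, which is also the likely source of the small numerical gap discussed later in this thread.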

@googlebot
Collaborator

Thanks for your pull request. It looks like this may be your first contribution to a Google open source project (if not, look below for help). Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

📝 Please visit https://cla.developers.google.com/ to sign.

Once you've signed (or fixed any issues), please reply here with @googlebot I signed it! and we'll verify it.


What to do if you already signed the CLA

Individual signers
Corporate signers

ℹ️ Googlers: Go here for more info.

1 similar comment from the google-cla bot (Apr 27, 2021).

google-cla bot added the cla: no (contribution license agreement not signed) label on Apr 27, 2021
@zhuangh
Contributor

zhuangh commented Apr 27, 2021

@julesmuhizi thank you so much for your PR, could you sign the CLA first?

@julesmuhizi
Author

@zhuangh I'm waiting on employer authorization for the CLA. Will do as soon as I get authorized.

@julesmuhizi
Author

@googlebot I signed it!

@google-cla

google-cla bot commented Apr 27, 2021

CLAs look good, thanks!

ℹ️ Googlers: Go here for more info.

google-cla bot added the cla: yes (contribution license agreement signed) label and removed the cla: no (contribution license agreement not signed) label on Apr 27, 2021
zhuangh requested a review from lishanok on April 27, 2021 at 21:07
@zhuangh
Contributor

zhuangh commented Apr 29, 2021

@julesmuhizi thanks! Could you also add a test for your code change?

@googlebot
Collaborator

CLAs look good, thanks!

ℹ️ Googlers: Go here for more info.

1 similar comment from the google-cla bot (Apr 30, 2021).

@nghielme

@zhuangh here is a test: https://gist.github.com/nicologhielmetti/84df61987476b031eb8fc6103f7e2915
@julesmuhizi and I compared its performance against a separate QDense + BatchNormalization stack to see how large the gap between the two is. The gap turned out to be small, probably due to slight differences in the quantization operations between the two versions.
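
A comparison like the one described here boils down to measuring the largest difference between the two models' predictions, roughly as in the sketch below, where folded_model and reference_model are placeholders for a model using the fused layer and an equivalent QDense + BatchNormalization stack.

```python
import numpy as np

def prediction_gap(folded_model, reference_model, x_test):
    """Largest absolute and relative differences between the two models' predictions."""
    y_folded = folded_model.predict(x_test)
    y_reference = reference_model.predict(x_test)
    abs_gap = np.max(np.abs(y_folded - y_reference))
    rel_gap = abs_gap / (np.max(np.abs(y_reference)) + 1e-12)
    return abs_gap, rel_gap
```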

@vloncar
Contributor

vloncar commented May 5, 2021

@julesmuhizi Don't you also need to add the new layer to bn_folding_utils.py?

@zhuangh Related to the folding of Dense + BatchNormalization: convert_to_folded_model will not include Dense layers. Is that a bug?

@zhuangh
Contributor

zhuangh commented May 5, 2021

Thanks @vloncar.

@lishanok has been reviewing this PR. We are considering whether to address that in a follow-up code change or in this PR.

@lishanok
Collaborator

lishanok commented May 5, 2021

@julesmuhizi Thank you for the commit. I reviewed it and it looks good. I'm not sure why your test generates different output values between the folded and non-folded models. Can you write a test similar to test_same_training_and_prediction() in bn_folding_test.py, where the weights are set to values for which quantization causes no loss of precision, and verify that the two versions produce the same results?

@vloncar There are quite a number of utility functions to modify in order to support a new folded layer type, for example convert_folded_model_to_normal, qtools, print_qmodel_summary, bn_folding_test, model_quantize, etc. Regarding tests, I would suggest writing tests similar to qpooling_test.py (tests for regular new layers) and bn_folding_test.py (tests specific to bn-folding layers) to check that all the utility functions have been updated to support the new layer.
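
A minimal sketch of the kind of test lishanok suggests above, assuming the fused layer added by this PR is exposed as QDenseBatchnorm with QDense-style quantizer arguments and the usual Keras weight ordering (class name, import path, constructor signature, and weight order are all assumptions to be checked against the actual code):

```python
import numpy as np
import tensorflow as tf
from qkeras import QDense, quantized_bits
# The fused layer added by this PR; the name and import path are assumed here,
# by analogy with the existing QConv2DBatchnorm.
from qkeras import QDenseBatchnorm


def representable(shape, bits=8, integer=0, seed=0):
    """Random values on the quantized_bits(bits, integer) grid, so quantizing them is lossless."""
    step = 2.0 ** -(bits - integer - 1)
    grid = np.arange(-2.0 ** integer, 2.0 ** integer, step)
    return np.random.default_rng(seed).choice(grid, size=shape).astype(np.float32)


units, input_dim = 4, 8
q = quantized_bits(8, 0, alpha=1)

# Reference: QDense followed by a separate BatchNormalization layer.
x_in = tf.keras.Input((input_dim,))
ref_out = tf.keras.layers.BatchNormalization()(
    QDense(units, kernel_quantizer=q, bias_quantizer=q)(x_in))
reference = tf.keras.Model(x_in, ref_out)

# Fused: the new layer, with constructor arguments assumed to mirror QDense.
y_in = tf.keras.Input((input_dim,))
fused = tf.keras.Model(
    y_in, QDenseBatchnorm(units, kernel_quantizer=q, bias_quantizer=q)(y_in))

# Give both models identical weights. The kernel sits exactly on the quantizer
# grid, and gamma is chosen so that gamma / sqrt(var + eps) is ~1, which keeps
# the folded kernel on the grid too, so quantization loses no precision.
kernel = representable((input_dim, units))
bias = np.zeros(units, dtype=np.float32)
gamma = np.full(units, np.sqrt(1.0 + 1e-3), dtype=np.float32)
beta = np.zeros(units, dtype=np.float32)
mean = np.zeros(units, dtype=np.float32)
var = np.ones(units, dtype=np.float32)
reference.set_weights([kernel, bias, gamma, beta, mean, var])
fused.set_weights([kernel, bias, gamma, beta, mean, var])  # weight order assumed

x = representable((16, input_dim), seed=1)
np.testing.assert_allclose(reference.predict(x), fused.predict(x),
                           rtol=1e-5, atol=1e-5)
```

With weights chosen this way, the folded and non-folded paths should agree to within floating-point rounding; any larger gap would point to a genuine difference in how the folded kernel is quantized.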

zhuangh mentioned this pull request on Jun 17, 2021
@zhuangh
Contributor

zhuangh commented Jun 17, 2021

Courtesy ping @julesmuhizi.

In case you missed it, there is a suggestion regarding the test case from @lishanok above.

Thanks.

@julesmuhizi
Author

Hi, I have been occupied with another project but will review and begin addressing the issues in the comment above. Thanks for the ping @zhuangh

@boubasse

Hi,
I would like to know if this thread is still active; I'm interested in having the QDenseBatchnorm layer. :)
Thank you for your work on QKeras, it is really straightforward to use.

@zhuangh
Contributor

zhuangh commented May 13, 2022

Hi @boubasse thanks for the reminder. @lishanok could you take a look?

@julesmuhizi
Author

julesmuhizi commented May 13, 2022 via email

@jmduarte
Contributor

@lishanok @zhuangh Sorry for the delay. We've added the requested test.

An (unrelated) autoqkeras test was also failing (presumably also on master) due to the same legacy optimizer issue that was fixed in 5b1fe84, so we adopted a similar fix for the Adam optimizer. Let us know if you would like us to split that into a separate PR.

We think this is ready to be merged, and a follow-up PR should handle updating the utility functions to support a new folded layer type (convert_folded_model_to_normal, qtools, print_qmodel_summary, model_quantize, etc.) and additional tests (similar to qpooling_test.py and bn_folding_test.py).

@zhuangh
Contributor

zhuangh commented Oct 25, 2022

Thanks, Javier

Hi Shan and Daniele, could you take a look? @lishanok and @danielemoro

@jmduarte
Contributor

Hi all, any chance you could take a look? Thanks!

@zhuangh @lishanok @danielemoro

@lishanok
Collaborator

lishanok commented Feb 17, 2023 via email

@jmduarte
Contributor

Hi, @zhuangh @lishanok @danielemoro any chance you could take a look? Thank you!

@jmduarte
Contributor

Hi @zhuangh @lishanok @danielemoro are you able to merge this?

@jmduarte
Contributor

hi @lishanok can you take a look at this? thanks!

@lishanok
Collaborator

lishanok left a comment

Since some time has passed and our main branch has gone through some changes, could you rebase your PR onto the master branch and see if it passes all tests? Thanks!

@jmduarte
Contributor

@lishanok thanks for looking! CI tests pass after rebase.

@AdrianAlan

Hi. My team used this feature, and it worked well for us. We'd love to see it merged. Are there any blockers?

Labels: cla: yes (contribution license agreement signed)
Projects: none yet
Linked issues: none yet
9 participants