
How to fold batch norm in Conv2D #39

Closed
tu1258 opened this issue Oct 11, 2020 · 2 comments


tu1258 commented Oct 11, 2020

I would like to fuse the batch norm into Conv2D before doing full integer quantization. Is that doable? I've read the article at https://qiita.com/PINTO/items/865250ee23a15339d556 but didn't find that information there. Could someone here help me? Thanks!
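For reference, the algebra for folding a batch norm into the preceding Conv2D can be sketched as below. This is a minimal NumPy illustration, not code from this repository; the function name, the channels-last weight layout, and the eps default are assumptions.

```python
import numpy as np

def fold_bn_into_conv(conv_w, conv_b, gamma, beta, mean, var, eps=1e-3):
    """Fold BatchNorm(gamma, beta, mean, var) into the preceding Conv2D.

    conv_w: (kh, kw, in_ch, out_ch) kernel
    conv_b: (out_ch,) bias (pass zeros if the conv has no bias)
    Returns (folded_w, folded_b) such that
    conv(x, folded_w) + folded_b == bn(conv(x, conv_w) + conv_b).
    """
    scale = gamma / np.sqrt(var + eps)   # per-output-channel scale
    folded_w = conv_w * scale            # broadcasts over the out_ch axis
    folded_b = (conv_b - mean) * scale + beta
    return folded_w, folded_b
```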

PINTO0309 (Owner) commented Oct 12, 2020

The batch norm appears to be automatically folded into the convolution at quantization time.

Before

[screenshot: model graph before quantization]

After

[screenshot: model graph after quantization]
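For context, a typical full integer quantization flow with the TFLite converter looks roughly like the sketch below; the saved model path, input shape, and representative dataset are placeholders. The folding shown in the screenshots happens during this convert step, so no manual fusion is needed beforehand.

```python
import numpy as np
import tensorflow as tf

def representative_dataset():
    # Placeholder: in practice, yield real preprocessed input samples.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model('saved_model')  # assumed path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open('model_full_integer_quant.tflite', 'wb') as f:
    f.write(tflite_model)
```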

tu1258 (Author) commented Oct 12, 2020

Many thanks, @PINTO0309!

tu1258 closed this as completed Oct 12, 2020