
[VTA][HotFix] Relay->VTA quantization fix #4433

Merged (2 commits) on Nov 27, 2019

Conversation

tmoreau89 (Contributor)

This addresses a compilation bug introduced by #4295.

Beyond the interface changes to the quantization pass (graph vs. module), it appears that this change has broken the quantization pass for VTA by inserting a multiplication at the end of the layer, where multiplication is not supported by VTA (it must instead rely on shift and add).

Investigation in progress.

@vinx13 @ZihengJiang
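
For context, a minimal illustration (not from the PR) of the constraint: VTA can apply a power-of-two requantization scale with an arithmetic shift, but it has no instruction for the general multiply that an unfolded batch_norm leaves at the end of a layer.

```python
# Hypothetical sketch: requantizing an int32 accumulator on VTA-like
# hardware. A power-of-two scale maps to a shift; a general fractional
# scale would need the multiply instruction that VTA's ALU lacks.
def requantize_pow2(acc: int, shift: int) -> int:
    # floor(acc / 2**shift), expressible with VTA's shift-and-add ops
    return acc >> shift

print(requantize_pow2(1 << 20, 8))  # 4096
```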

tmoreau89 (Contributor, Author)

@liangfu This is the follow-up to the fix you requested. Right now quantization breaks compilation to VTA, so this will require further investigation.

vinx13 (Member) commented Nov 27, 2019

@tmoreau89 The default opt_level has been changed to 2, so batch_norm won't be folded during quantization. Will this cause an issue in VTA? You can still wrap the quantize call within a build_config scope if that's needed. I made this change because running calibration with batch_norm folded caused some accuracy issues.
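
For reference, a minimal sketch of the workaround vinx13 describes, assuming the TVM 0.6-era API; `mod` and `params` are assumed to come from an earlier Relay frontend import:

```python
from tvm import relay

# `mod`, `params`: assumed to come from a frontend import, e.g.
#   mod, params = relay.frontend.from_mxnet(gluon_model, shape_dict)
# Wrapping quantize() in an explicit opt_level=3 scope restores the
# batch_norm folding (SimplifyInference + FoldScaleAxis) that the new
# default of opt_level=2 skips before calibration.
with relay.build_config(opt_level=3):
    with relay.quantize.qconfig(global_scale=8.0, skip_conv_layers=[0]):
        mod = relay.quantize.quantize(mod, params=params)
```

This mirrors what the hotfix commit below does, per its message "setting optlevel to 3 for quantization to fold batchnorm".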

tmoreau89 (Contributor, Author)

> batch_norm won't be folded during quantization. Will this cause an issue in VTA?

Bingo. That's very likely the root issue, and it explains why we have multiplication in there.

tmoreau89 (Contributor, Author)

@vinx13 Looks like the opt_level fix does the trick. Issuing the fix right now.

tmoreau89 changed the title from [WIP][VTA][HotFix] Relay->VTA quantization fix to [VTA][HotFix] Relay->VTA quantization fix on Nov 27, 2019
tmoreau89 (Contributor, Author)

Hotfix should be ready for review.

liangfu (Member) left a comment

LGTM

yzhliu (Member) commented Nov 27, 2019

Thanks @tmoreau89 @vinx13 @liangfu. This is merged and will be included in the v0.6.0 release.

yzhliu pushed a commit that referenced this pull request Nov 27, 2019
* relay -> vta fix

* setting optlevel to 3 for quantization to fold batchnorm
Leo-arm pushed a commit to Leo-arm/tvm that referenced this pull request Nov 29, 2019

zxy844288792 pushed a commit to zxy844288792/tvm that referenced this pull request Dec 13, 2019

zxy844288792 pushed a commit to neo-ai/tvm that referenced this pull request Dec 13, 2019
tmoreau89 deleted the tutorial_fix branch on February 13, 2020
tqchen pushed a commit to tqchen/tvm that referenced this pull request Mar 29, 2020