[RFC] Disable Floating-point distributive property #2418

Closed
liwchang opened this issue Jan 11, 2019 · 3 comments

liwchang commented Jan 11, 2019

TVM seems to apply the floating-point distributive property (HalideIR\src\arithmetic\Simplify.cc, around lines 845-86).
In general, floating-point arithmetic should only be treated as commutative, not associative or distributive.

Applying the distributive property to floating point can significantly change the rounding error and impact accuracy.
We did observe this in some models.
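To make this concrete, here is a small standalone C++ sketch (the example values are my own, not taken from any model) showing that rewriting `a*(b+c)` into `a*b + a*c` is not value-preserving under IEEE 754:

```cpp
#include <cstdio>

int main() {
  // Rounding case: b + c ties to exactly 1.0 under round-to-nearest-even,
  // so the factored form yields 3.0, while the distributed form keeps the
  // small term and rounds up to 3 + 2^-51.
  double a = 3.0, b = 0x1p-53, c = 1.0;
  std::printf("a*(b+c)   = %.17g\n", a * (b + c));    // 3
  std::printf("a*b + a*c = %.17g\n", a * b + a * c);  // 3.0000000000000004

  // Overflow case: distributing creates infinite intermediates that the
  // factored form never produces, turning a finite result into NaN.
  double x = 1e200, y = 1e200, z = -1e200;
  double xy = x * y;  // overflows to +inf
  double xz = x * z;  // overflows to -inf
  std::printf("x*(y+z)   = %g\n", x * (y + z));  // 0
  std::printf("x*y + x*z = %g\n", xy + xz);      // nan
  return 0;
}
```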

One way to fix this is to check the type and bypass these rules for floating-point operands.
Another is a flag to enable/disable them, if someone really wants to trade a little accuracy for speed.
(The default should disable floating-point associative and distributive rewrites.)
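As a rough illustration of the type-check approach (a minimal standalone sketch of the idea, with an illustrative name `sum_of_products`; this is not the actual Simplify.cc rule code), the rewrite would only fire for integral types:

```cpp
#include <type_traits>

// Sketch of the proposed guard: an algebraic rewrite such as
// a*b + a*c <-> a*(b+c) is only applied when the value type is integral,
// so floating-point expressions keep their original rounding behaviour.
template <typename T>
T sum_of_products(T a, T b, T c) {
  if constexpr (std::is_integral_v<T>) {
    return a * (b + c);    // rewritten (factored) form, exact for integers
  } else {
    return a * b + a * c;  // original form, preserved for floating point
  }
}
```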

Halide (at least the current master) seems to do both.

-L

liwchang changed the title from "Floating-point distributive property" to "Floating-point distributive property is applied" on Jan 11, 2019

tqchen commented Jan 11, 2019

I think it is fine to disable floating-point simplification, as we mainly only need integer analysis.

tqchen changed the title from "Floating-point distributive property is applied" to "[RFC] Disable Floating-point distributive property" on Jan 17, 2019

tqchen commented Feb 12, 2019

Consolidating this issue into #2588.

tqchen closed this as completed on Feb 12, 2019
Ravenwater commented

For what it is worth, the posit number system restores the associative and distributive properties for floating point; the culprit is IEEE floating-point rounding rules. The solution requires special hardware, which has been available since Dec 2017, and we are trying to incorporate this hardware into VTA. The benefit is significant: an 8-bit posit beats a 32-bit IEEE float in terms of training accuracy. And since most models are memory bound, we get a big boost in performance as well. Of course, an 8-bit posit is still slower than an 8-bit integer.
