Half float checkpoint #797
Conversation
Added a half-precision C++ backend, half tensor conversion, and half tensor math operations; added half support to the linear, softmax, and CNN layers and to the opt module. Tested the naive, module, and train_cnn examples on MLP and CNN; refactored broadcast; added a benchmark for fp16 vs fp32.
Usage example:
Codecov Report

@@            Coverage Diff             @@
##              dev     #797      +/-   ##
==========================================
- Coverage   70.05%   63.74%    -6.31%
==========================================
  Files         100       87       -13
  Lines       11573     4904     -6669
==========================================
- Hits         8107     3126     -4981
+ Misses       3466     1778     -1688
@@ -0,0 +1,4575 @@
// half - IEEE 754-based half-precision floating-point library.
//
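The new file implements an IEEE 754-based binary16 type. As a minimal sketch of what that format looks like (illustrative only, using numpy's `float16` rather than the C++ library added in this diff), the 16 bits split into a 1-bit sign, a 5-bit biased exponent (bias 15), and a 10-bit fraction:

```python
import numpy as np

def half_fields(x):
    """Decode the IEEE 754 binary16 fields of a value.

    numpy's float16 uses the same 1/5/10-bit sign/exponent/fraction
    layout that a binary16 C++ library implements.
    """
    bits = int(np.float16(x).view(np.uint16))
    sign = bits >> 15
    exponent = (bits >> 10) & 0x1F   # 5-bit biased exponent (bias 15)
    fraction = bits & 0x3FF          # 10-bit significand fraction
    return sign, exponent, fraction

# 1.0 encodes as 0x3C00: sign=0, exponent=15 (bias 15 -> 2^0), fraction=0
print(half_fields(1.0))   # (0, 15, 0)
```

The narrow 10-bit fraction (about 3 decimal digits) is why mixed-precision training keeps some accumulations in fp32.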
put it under singa/core/ ?
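The PR description mentions a benchmark for fp16 vs fp32. A minimal numpy sketch of the trade-off such a benchmark measures (this is not the PR's actual benchmark, which presumably exercises SINGA's tensor API): half precision halves memory per element at the cost of roughly three decimal digits of accuracy.

```python
import numpy as np

# Illustrative fp16-vs-fp32 comparison, assuming nothing beyond numpy.
n = 1 << 20
x32 = np.random.default_rng(0).standard_normal(n).astype(np.float32)
x16 = x32.astype(np.float16)   # round each element to binary16

print("fp32 bytes:", x32.nbytes)   # 4 bytes per element
print("fp16 bytes:", x16.nbytes)   # 2 bytes per element
print("max abs rounding error:",
      np.max(np.abs(x32 - x16.astype(np.float32))))
```

On hardware with native fp16 units the smaller footprint also roughly doubles memory bandwidth, which is where the speedup in such benchmarks comes from.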