Support for half-precision (16-bit) floating-point datasets #587
Conversation
clang-format is version 12 here: https://github.com/BlueBrain/HighFive#code-formatting
#588 begs to differ.
Thank you for all the work!
Codecov Report
@@            Coverage Diff            @@
##             master     #587   +/-   ##
==========================================
  Coverage          ?   80.57%
==========================================
  Files             ?       66
  Lines             ?     3572
  Branches          ?        0
==========================================
  Hits              ?     2878
  Misses            ?      694
  Partials          ?        0

Continue to review the full report at Codecov.
Description
Although 16-bit float is not a native C++ type, it is widely used to save runtime and disk space in situations where high precision is not needed, most notably in deep-learning applications. Python's h5py trivially supports the creation of float16 datasets (type "<f2"). This PR adds optional support for half-precision in HighFive.
Code change summary
- include/highfive/bits/H5DataType_misc.hpp, where the AtomicType constructor was specialized for float16_t (see the sketch after this list)
- numerical_test_types was augmented by adding float16_t (if enabled)
- A new example, create_dataset_half_float.cpp, in src/examples
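For context, the heart of that specialization is describing the IEEE 754 half-precision layout (1 sign bit, 5 exponent bits, 10 mantissa bits, exponent bias 15) to HDF5. The sketch below illustrates the idea using only the plain HDF5 C API; the function name is hypothetical, and in the PR the equivalent calls would sit inside the specialized AtomicType<float16_t> constructor, whose exact contents may differ.

```cpp
#include <hdf5.h>

// Sketch only: construct an HDF5 datatype with an IEEE 754 half-precision
// layout. The function name is illustrative; the PR performs these steps
// inside the specialized AtomicType<float16_t> constructor.
inline hid_t create_half_float_type() {
    hid_t hid = H5Tcopy(H5T_NATIVE_FLOAT);  // start from the native 32-bit float
    // Sign bit at position 15, exponent at bits 10-14 (5 bits),
    // mantissa at bits 0-9 (10 bits).
    H5Tset_fields(hid, 15, 10, 5, 0, 10);
    H5Tset_size(hid, 2);    // shrink the type to 2 bytes total
    H5Tset_ebias(hid, 15);  // half-precision exponent bias
    return hid;
}
```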
How to test this?
Download the (single-header) Half library from http://half.sourceforge.net/, and copy half.hpp into /usr/local/include/. Then build and run the unit tests:
cmake .. -DHIGHFIVE_USE_HALF_FLOAT=ON
make -j8
make test
You should see tests with numerical_test_types - 13, which corresponds to float16_t.
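For a quick sanity check beyond the unit tests, writing a half-precision dataset should look like writing any other numeric type. The snippet below is a rough sketch in the spirit of the new create_dataset_half_float.cpp example (the actual example's contents may differ); the file and dataset names are made up, and it assumes HighFive was configured with -DHIGHFIVE_USE_HALF_FLOAT=ON and that half.hpp is on the include path.

```cpp
#include <vector>

#include <half.hpp>  // half_float::half from the Half library

#include <highfive/H5DataSet.hpp>
#include <highfive/H5DataSpace.hpp>
#include <highfive/H5File.hpp>

int main() {
    using namespace HighFive;

    // Ten half-precision values; half_float::half converts from float.
    std::vector<half_float::half> data(10, half_float::half(1.5f));

    // Hypothetical file and dataset names.
    File file("half_float_demo.h5", File::ReadWrite | File::Create | File::Truncate);
    DataSet dataset = file.createDataSet<half_float::half>("values", DataSpace::From(data));
    dataset.write(data);
    return 0;
}
```

Inspecting the resulting file with h5py should then show the "<f2" dtype mentioned in the description above.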
Test System