Support for 16-Bit Floats #27
GiorgosXou started this conversation in Ideas
Replies: 2 comments, 5 replies
This feels so wrong even though it's working... ah... so, by tweaking these lovely operators:

```cpp
operator float() const;
bool operator < (const int& i);
float16 operator / (const int& i);
float16 operator / (const long int& li);
float16 operator / (const unsigned int& ui);
float16 operator * (const double& d);
```

(probably implemented in the worst way I could), plus:

```cpp
#define DFLOAT float16
#define DFLOAT_LEN 4
```

(and some casts), I was able to run a NN with it.
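Since only the operator declarations appear above, here is a minimal sketch of how such a `float16` wrapper could be implemented. This is an assumption for illustration, not the library's actual code: it stores IEEE 754 binary16 bits, converts through 32-bit `float` for arithmetic, and ignores subnormals, NaN, and rounding (values are truncated) for brevity. Arguments are taken by value rather than `const&` to keep the sketch short.

```cpp
#include <cstdint>
#include <cstring>

// Hypothetical minimal float16: stores IEEE binary16 bits, does all
// arithmetic by converting to 32-bit float and truncating back.
struct float16 {
    uint16_t bits;

    float16() : bits(0) {}
    float16(float f) : bits(fromFloat(f)) {}

    // half -> float (normals and zero only; subnormals/NaN omitted)
    operator float() const {
        uint32_t sign = uint32_t(bits & 0x8000u) << 16;
        uint32_t exp  = (bits >> 10) & 0x1Fu;
        uint32_t man  = bits & 0x3FFu;
        uint32_t f = (exp == 0 && man == 0)
            ? sign                                   // +/- zero
            : sign | ((exp + 112u) << 23) | (man << 13);
        float out;
        std::memcpy(&out, &f, sizeof out);           // type-pun safely
        return out;
    }

    // float -> half (truncates the mantissa; no round-to-nearest)
    static uint16_t fromFloat(float f) {
        uint32_t x;
        std::memcpy(&x, &f, sizeof x);
        uint16_t sign = (x >> 16) & 0x8000u;
        int32_t  exp  = int32_t((x >> 23) & 0xFFu) - 127 + 15;
        uint16_t man  = (x >> 13) & 0x3FFu;
        if (exp <= 0)  return sign;                  // underflow -> zero
        if (exp >= 31) return sign | 0x7C00u;        // overflow -> infinity
        return sign | uint16_t(exp << 10) | man;
    }

    // Arithmetic routes through float, then truncates back to half
    float16 operator / (int i) const    { return float16(float(*this) / i); }
    float16 operator * (double d) const { return float16(float(float(*this) * d)); }
    bool    operator < (int i) const    { return float(*this) < float(i); }
};
```

Routing every operation through `float` is the usual approach on MCUs without hardware half-precision support; the cost is one conversion each way per operation.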
So recently, I started playing around with TensorFlow again, and realised there's `tf.keras.backend.set_floatx('float16')`. So why not?
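For reference, this is what that call does on the TensorFlow side: it switches the default Keras float type, so any layer created afterwards stores its weights in half precision. A small sketch (the layer shape here is illustrative):

```python
import tensorflow as tf

# Switch Keras' default float type to half precision
tf.keras.backend.set_floatx('float16')

# Layers created from now on keep their weights in float16
layer = tf.keras.layers.Dense(4)
layer.build((None, 3))
# layer.kernel.dtype is now tf.float16
```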