Bitwise AND Tensor #218
Conversation
Also includes fixes for the f16 and f32 datatypes on HOST.
rpp_hip_math_multiply8_const(src1_f8, src1_f8, (float4)255);
rpp_hip_math_multiply8_const(src2_f8, src2_f8, (float4)255);
dst_f8->f1[0] = (float)((uchar)(src1_f8->f1[0]) & (uchar)(src2_f8->f1[0]));
dst_f8->f1[1] = (float)((uchar)(src1_f8->f1[1]) & (uchar)(src2_f8->f1[1]));
Please verify against OpenVX, OpenCV, and the previous RPP implementation that floats are converted to uchars like this.
Found this link, where they convert floats to integers before the bitwise operation:
https://stackoverflow.com/questions/1723575/how-to-perform-a-bitwise-operation-on-floating-point-numbers
Usually bitwise operations are performed on integer or char datatypes, not directly on floats.
I have also checked whether anyone on our team has used bitwise operations on float values in trainings, and I haven't found any such case so far.