Hi,
I'm using OpenBLAS for neural network inference on mobile ARM devices. Specifically, I use Caffe 1 with OpenBLAS, and it works well, but it is not fast enough.
It seems that neural networks do not need full-precision floating-point operations; half precision works fine, and even 8-bit precision is often acceptable.
Is it possible to use ASIMD with half-precision floats on ARMv8-A, or NEON on ARMv7-A, to accelerate primitive operations such as convolution and matrix multiplication?
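
What I have in mind is something like the sketch below: keep the data in half precision to halve memory traffic, widen it to fp32 with the NEON conversion instructions, and accumulate in fp32. This is just illustrative code I put together, not anything from OpenBLAS; `dot_f16` is a made-up name, and it assumes a compiler with `__fp16` support (e.g. `-mfpu=neon-fp16` on ARMv7-A; AArch64 has the conversions natively).

```c
#include <arm_neon.h>

/*
 * Hypothetical sketch (my own code, not an OpenBLAS API): a dot-product
 * kernel that stores data in half precision but accumulates in single
 * precision, so it works on ARMv7-A with the NEON fp16 extension as
 * well as on ARMv8-A.
 */
float dot_f16(const __fp16 *a, const __fp16 *b, int n)
{
    float32x4_t acc = vdupq_n_f32(0.0f);
    int i = 0;
    for (; i + 4 <= n; i += 4) {
        /* Load four fp16 values and widen them to fp32. */
        float32x4_t va = vcvt_f32_f16(vld1_f16(a + i));
        float32x4_t vb = vcvt_f32_f16(vld1_f16(b + i));
        /* Multiply-accumulate in fp32 to limit rounding error. */
        acc = vmlaq_f32(acc, va, vb);
    }
    /* Horizontal sum of the vector accumulator. */
    float sum = vgetq_lane_f32(acc, 0) + vgetq_lane_f32(acc, 1)
              + vgetq_lane_f32(acc, 2) + vgetq_lane_f32(acc, 3);
    /* Scalar tail for n not divisible by 4. */
    for (; i < n; i++)
        sum += (float)a[i] * (float)b[i];
    return sum;
}
```

As I understand it, ARMv8.2-A also adds native fp16 arithmetic instructions (e.g. `vfmaq_f16` behind the `+fp16` target feature), which would avoid the widening step entirely, but I don't know whether any OpenBLAS kernels take advantage of them.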
Thanks very much.