
Is it possible to use ASIMD with half-precision float on armv8a, or NEON on armv7a, for acceleration? #1181


Description

@aswywy

Hi,
I'm using OpenBLAS for neural network inference on mobile ARM devices.
I use Caffe 1 with OpenBLAS and it works well, but it is not fast enough.
It seems that neural networks do not need full-precision float operations; half precision is fine, and even 8-bit precision is acceptable.
Is it possible to use ASIMD with half-precision float on armv8a, or NEON on armv7a, to accelerate primitive operations such as convolution and matrix multiplication?
Thanks very much.
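
To make the question concrete, here is a rough sketch (mine, not OpenBLAS code) of the kind of fp16 inner kernel I have in mind, using a simple dot product as a stand-in for the GEMM/convolution inner loop. The function names and compile flags are just illustrative assumptions. As far as I know, native fp16 arithmetic (`vfmaq_f16` etc.) requires the ARMv8.2-A FP16 extension, while ARMv7-A NEON and baseline ARMv8-A ASIMD only provide fp16 load/store and conversion, so the math there would still run in fp32.

```c
/*
 * Illustrative sketch only, not OpenBLAS code: half-precision dot-product
 * kernels, the building block of GEMM/convolution inner loops.
 *
 * dot_f16_arith:   native fp16 arithmetic; needs the ARMv8.2-A half-precision
 *                  extension (e.g. -march=armv8.2-a+fp16).
 * dot_f16_storage: keeps data in fp16 (half the memory traffic) but widens to
 *                  fp32 for the math; this is what ARMv7-A NEON
 *                  (-mfpu=neon-fp16) and baseline ARMv8-A can do.
 */
#include <arm_neon.h>
#include <stddef.h>

#if defined(__ARM_FEATURE_FP16_VECTOR_ARITHMETIC)
float dot_f16_arith(const float16_t *a, const float16_t *b, size_t n)
{
    float16x8_t acc = vdupq_n_f16((float16_t)0.0f);
    size_t i = 0;
    for (; i + 8 <= n; i += 8) {
        float16x8_t va = vld1q_f16(a + i);
        float16x8_t vb = vld1q_f16(b + i);
        acc = vfmaq_f16(acc, va, vb);        /* 8 fp16 FMAs per instruction */
    }
    /* Widen to fp32 before the horizontal reduction to limit rounding error. */
    float32x4_t s = vaddq_f32(vcvt_f32_f16(vget_low_f16(acc)),
                              vcvt_f32_f16(vget_high_f16(acc)));
    float sum = vgetq_lane_f32(s, 0) + vgetq_lane_f32(s, 1)
              + vgetq_lane_f32(s, 2) + vgetq_lane_f32(s, 3);
    for (; i < n; ++i)                        /* scalar tail */
        sum += (float)a[i] * (float)b[i];
    return sum;
}
#endif

float dot_f16_storage(const float16_t *a, const float16_t *b, size_t n)
{
    float32x4_t acc = vdupq_n_f32(0.0f);
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        /* fp16 is only a storage format here: load, widen, then FMA in fp32. */
        float32x4_t va = vcvt_f32_f16(vld1_f16(a + i));
        float32x4_t vb = vcvt_f32_f16(vld1_f16(b + i));
        acc = vmlaq_f32(acc, va, vb);
    }
    float sum = vgetq_lane_f32(acc, 0) + vgetq_lane_f32(acc, 1)
              + vgetq_lane_f32(acc, 2) + vgetq_lane_f32(acc, 3);
    for (; i < n; ++i)                        /* scalar tail */
        sum += (float)a[i] * (float)b[i];
    return sum;
}
```

Even the storage-only variant halves the memory traffic for the operands, which may already help on bandwidth-limited mobile parts; the native fp16 path would additionally double the number of lanes per vector instruction.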
