Commit
WOQ: Optimize quantization of activation (#2584)
* WOQ: Optimize per-tensor/per-block quantization of activation for lowp-mode=INT8
* Refine the activation-size threshold for parallelizing quantization
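The actual IPEX kernels are not shown in this commit message; the following is a minimal NumPy sketch of what per-tensor versus per-block symmetric INT8 activation quantization looks like, assuming symmetric scaling to the int8 range (function names and the block size are illustrative, not from the source):

```python
import numpy as np

def quantize_per_tensor_int8(x: np.ndarray):
    # Per-tensor: a single scale for the whole activation tensor.
    amax = np.abs(x).max()
    scale = amax / 127.0 if amax > 0 else 1.0
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale

def quantize_per_block_int8(x: np.ndarray, block_size: int = 64):
    # Per-block: one scale per contiguous block along the last axis,
    # trading extra scale metadata for better accuracy on
    # outlier-heavy activations.
    blocks = x.reshape(-1, block_size)
    amax = np.abs(blocks).max(axis=1, keepdims=True)
    scales = np.where(amax > 0, amax / 127.0, 1.0)
    q = np.clip(np.round(blocks / scales), -128, 127).astype(np.int8)
    return q.reshape(x.shape), scales
```

Per-block quantization keeps more scales to apply, which is why parallelizing it only pays off once the activation is large enough, matching the threshold tuning mentioned above.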