Highly optimized inference engine for Binarized Neural Networks
A translator from Intel SSE intrinsics to Arm/Aarch64 NEON implementation
Light-weight Bare Metal Hypervisor (Type 1) written in C++
The Compute Library is a set of computer vision and machine learning functions optimised for both Arm CPUs and GPUs using SIMD technologies.
A Lightweight Single Header file C++ AES Library that also supports AES Hardware Acceleration Technology
TensorFlow Lite segmentation on Raspberry Pi 4 aka Unet at 4.2 FPS
TensorFlow Lite segmentation on Raspberry Pi 4 aka Unet at 7.2 FPS with 64-bit OS
Super fast face detection on Raspberry Pi 4
Recognize 2000+ faces on your Raspberry Pi 4 with database auto-fill and anti-spoofing
Face mask detection on a bare Raspberry Pi 4 with a 32- or 64-bit OS
A QT-based host-computer (PC-side) application for embedded ARM data acquisition; when adding new features, please remember to open a PR so the project can gradually grow richer
TensorFlow Lite SSD on bare Raspberry Pi 4 with 64-bit OS at 24 FPS
TensorFlow Lite Posenet on bare Raspberry Pi 4 with 64-bit OS at 9.4 FPS
TensorFlow Lite classification on a bare Raspberry Pi 4 with 64-bit OS at 23 FPS
TensorFlow Lite SSD on a bare Raspberry Pi 4 at 17 FPS