Peat, a Python-based Intel-optimized TensorFlow dockerization with a configurator for CPU and memory constraints
Updated Aug 5, 2020 · Python
Learned Approximate Matrix Profile (LAMP) implementation on Ultra96-v2 board
Open source RTL simulation acceleration on commodity hardware
Scalable linear regression for multi-GPU, TPU training with PyTorch
Running XOR encoder
Model LLM inference on single-core hardware architectures
ML benchmark performance comparing LightOn's Optical Processing Unit (OPU) with CPU and GPU.
Deep learning library that exports itself to HDL code for FPGA-based hardware acceleration
Memristor model: Various implementations of the simplified memristor model "JART-TUD VCM"
Installing hardware-accelerated PyTorch with Poetry on different hardware using the same `pyproject.toml`
Measure floating point operations per second on your device
Single Shot MultiBox Detector deployed on an OAK-D Lite camera via DepthAI
A Tool for Parallel Processing of ROS2 Hardware Acceleration on Zynq
An open-source parameterizable NPU generator with a full-stack, multi-target compiler for intelligent workloads.
Real-time, battery-powered convolutional neural network inference on the Movidius NCS and a Raspberry Pi using a webcam
Implementation of a compact optical neural network SqueezeLight based on multi-operand micro-rings, DATE 2021
Hardware-accelerated OpenCV, Torch & TensorRT Ubuntu 20.04 Docker images for Jetson Nano, containing any Python version you need up to the latest 3.12, with Ultralytics YOLOv10 TensorRT support
Design Space Exploration (DSE) simulator for binary neural network accelerator
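One of the projects listed above measures floating-point operations per second on the local device. A minimal, hypothetical sketch of that idea in pure Python (not the listed project's actual code, which likely uses vectorized kernels such as large matrix multiplies to approach hardware peak) times a fixed number of multiply-adds:

```python
import time


def measure_flops(n: int = 1_000_000) -> float:
    """Estimate floating-point operations per second.

    Times n fused multiply-add iterations; each iteration counts as
    2 FLOPs (one multiply, one add). Pure-Python throughput is far
    below hardware peak, so treat the result as a lower bound.
    """
    x = 1.000001
    acc = 0.0
    start = time.perf_counter()
    for _ in range(n):
        acc = acc * x + x  # 2 FLOPs per iteration
    elapsed = time.perf_counter() - start
    return (2 * n) / elapsed


if __name__ == "__main__":
    print(f"Estimated throughput: {measure_flops():.3e} FLOP/s")
```

Real FLOPS benchmarks typically sweep problem sizes and use BLAS-backed operations so the interpreter overhead seen here does not dominate the measurement.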