Low-level HTTP service for image recognition, written in Rust with CUDA support.
CUDA® is a parallel computing platform and programming model developed by NVIDIA for general-purpose computing on graphics processing units (GPUs). With CUDA, developers can dramatically speed up computing applications by harnessing the power of GPUs.
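CUDA's core idea is mapping one kernel function over many parallel threads, each handling one element. As a rough illustration only (plain Rust on the CPU, no CUDA involved — the names here are hypothetical, not from any repository listed below), the canonical vector-add kernel corresponds to computing `c[i] = a[i] + b[i]` independently for every index:

```rust
// CPU-side analogue of the classic CUDA vector-add kernel:
// each "thread" i would compute c[i] = a[i] + b[i].
fn vector_add(a: &[f32], b: &[f32]) -> Vec<f32> {
    assert_eq!(a.len(), b.len());
    a.iter().zip(b).map(|(x, y)| x + y).collect()
}

fn main() {
    let a = vec![1.0_f32, 2.0, 3.0];
    let b = vec![10.0, 20.0, 30.0];
    // Every element is independent, which is what makes the
    // computation trivially parallel on a GPU.
    println!("{:?}", vector_add(&a, &b)); // [11.0, 22.0, 33.0]
}
```

On a GPU, the loop disappears: each thread picks its index from its block and thread IDs and performs one addition.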
Toying around with ArrayFire in Rust
Build a deep learning inference framework from scratch (Rust edition). A Rust version of the public projects https://github.com/zjhellofss/KuiperInfer and https://github.com/zjhellofss/kuiperdatawhale.
Deep Learning library written in Rust (OpenCL, CUDA & CPU)
Rust implementation of an image processing library with CUDA.
Graph Manipulation Library For GPUs, CPUs, and FPGAs via CUDA, OpenCL, and oneAPI
ECE 459: Programming for Performance, University of Waterloo
Implementing a parser and code generator for APL, targeting other languages.
Sparse Matrix Library for GPUs, CPUs, and FPGAs via CUDA, OpenCL, and oneAPI
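Sparse matrix libraries like the one above typically store only the nonzero entries. As a hedged sketch (this is the standard CSR layout, not that repository's actual API — the `Csr` type and `spmv` method here are hypothetical), a compressed-sparse-row matrix-vector product looks like:

```rust
// Minimal CSR (compressed sparse row) sketch: only nonzeros are stored.
// Hypothetical illustration, not the API of any library listed above.
struct Csr {
    row_ptr: Vec<usize>, // row i spans values[row_ptr[i]..row_ptr[i + 1]]
    col_idx: Vec<usize>, // column index of each stored value
    values: Vec<f64>,    // the nonzero entries, row by row
}

impl Csr {
    fn spmv(&self, x: &[f64]) -> Vec<f64> {
        let n_rows = self.row_ptr.len() - 1;
        let mut y = vec![0.0; n_rows];
        for i in 0..n_rows {
            for k in self.row_ptr[i]..self.row_ptr[i + 1] {
                y[i] += self.values[k] * x[self.col_idx[k]];
            }
        }
        y
    }
}

fn main() {
    // The 2x2 matrix [[2, 0], [1, 3]] in CSR form.
    let m = Csr {
        row_ptr: vec![0, 1, 3],
        col_idx: vec![0, 0, 1],
        values: vec![2.0, 1.0, 3.0],
    };
    println!("{:?}", m.spmv(&[1.0, 1.0])); // [2.0, 4.0]
}
```

On GPUs, each row (or a group of rows) is usually assigned to its own thread, since the rows are independent.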
Gradient Descent Optimizers and Genetic Algorithms using GPUs, CPUs, and FPGAs via CUDA, OpenCL, and oneAPI
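The basic update rule behind gradient descent optimizers is `w ← w − lr · ∇f(w)`. A minimal sketch, assuming a simple quadratic objective `f(w) = (w − 3)²` (hypothetical example code, not the API of the repository above):

```rust
// Plain gradient descent on f(w) = (w - 3)^2, whose gradient is 2(w - 3).
// Hypothetical illustration; the minimum is at w = 3.
fn gradient_descent(mut w: f64, lr: f64, steps: usize) -> f64 {
    for _ in 0..steps {
        let grad = 2.0 * (w - 3.0); // analytic gradient of the objective
        w -= lr * grad;             // step against the gradient
    }
    w
}

fn main() {
    let w = gradient_descent(0.0, 0.1, 100);
    println!("{w:.4}"); // converges toward 3.0
}
```

Genetic algorithms take a different route to the same goal — maintaining a population of candidate `w` values and selecting, mutating, and recombining the fittest — which is why libraries often offer both.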
An ablation study on the transformer network for Natural Language Processing
Rust bindings to the nvJPEG library.
Neural Networks with Sparse Weights in Rust using GPUs, CPUs, and FPGAs via CUDA, OpenCL, and oneAPI
CUDA was created by Nvidia and first released on June 23, 2007.