Extremely fast color quantization. Reduce color information of a 24-bit RGB bitmap down to 8-bit.
Updated May 26, 2024 · C
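The 24-bit-to-8-bit reduction described above can be done in several ways; the fastest (though lowest-quality) is a fixed bit-truncation palette rather than an optimized one. As a minimal sketch of that idea (not this repo's actual algorithm, which likely builds an adaptive 256-color palette):

```c
#include <stdint.h>

/* Map a 24-bit RGB pixel to an 8-bit value using a fixed 3-3-2
 * bit layout (RRRGGGBB): keep the top 3 bits of red and green and
 * the top 2 bits of blue. Palette-based quantizers instead compute
 * an optimized 256-color palette from the image. */
uint8_t rgb888_to_rgb332(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint8_t)((r & 0xE0) | ((g & 0xE0) >> 3) | (b >> 6));
}
```

White (255, 255, 255) maps to 0xFF and black to 0x00; all 256 output codes are reachable.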
Neural networks with low-bit weights on low-end 32-bit microcontrollers such as the CH32V003 RISC-V microcontroller and others
🎨 Convert images to 15/16-bit RGB color with dithering
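A common form of the 16-bit conversion mentioned here is RGB565 (5 bits red, 6 green, 5 blue), with ordered dithering to mask banding. The following is an illustrative sketch of that general technique, assuming a 2×2 Bayer matrix; it is not taken from the linked repo:

```c
#include <stdint.h>

/* Convert one RGB888 pixel to RGB565, adding a small position-
 * dependent bias (2x2 Bayer ordered dither) before truncating,
 * so smooth gradients break up banding. (x, y) is the pixel's
 * position in the image. */
uint16_t rgb888_to_rgb565_dithered(uint8_t r, uint8_t g, uint8_t b,
                                   int x, int y)
{
    /* 2x2 Bayer threshold pattern */
    static const int bayer[2][2] = { {0, 2}, {3, 1} };
    int t = bayer[y & 1][x & 1];

    int r5 = (r + t * 2) >> 3; if (r5 > 31) r5 = 31; /* 8 -> 5 bits */
    int g6 = (g + t)     >> 2; if (g6 > 63) g6 = 63; /* 8 -> 6 bits */
    int b5 = (b + t * 2) >> 3; if (b5 > 31) b5 = 31; /* 8 -> 5 bits */

    return (uint16_t)((r5 << 11) | (g6 << 5) | b5);
}
```

Larger Bayer matrices (4×4, 8×8) or error-diffusion dithering such as Floyd-Steinberg give smoother results at higher cost.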
Lossy PNG compressor — pngquant command based on libimagequant library
[NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning; [NeurIPS 2022] MCUNetV3: On-Device Training Under 256KB Memory
Color quantization/palette generation for png images
Clean C language version of quantizing llama2 model and running quantized llama2 model
Uniform quantizer that uses mexCallMATLAB to call different MATLAB commands and plot the results
The purpose of this project is to compare different ways of computing the convolution operation and to see whether naive quantization actually speeds it up.
Quantized Memory-Augmented Neural Networks (AAAI-18)
The Quantizer - A Swift-based reimplementation of ImageAlpha
Subband filtering with ADPCM
Code for "Characterising Across Stack Optimisations for Deep Convolutional Neural Networks"