FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups at medium batch sizes of 16-32 tokens.
Commented source code of Texas Instruments' original Speak & Spell™
VHDL code for a signed 4-bit multiplier built from 4-bit adders, based on the Baugh-Wooley method.
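The Baugh-Wooley scheme turns signed (two's-complement) multiplication into an all-positive partial-product array: the cross terms involving a sign bit are complemented, and two correction constants are added. A minimal Python sketch of the 4x4 case (function name and bit-slicing are my own, not taken from the repository above):

```python
def baugh_wooley_4x4(a: int, b: int) -> int:
    """Multiply two 4-bit two's-complement values (given as raw 4-bit
    patterns) using the Baugh-Wooley partial-product scheme; returns
    the 8-bit two's-complement product."""
    n = 4
    ab = [(a >> i) & 1 for i in range(n)]  # bits of a, LSB first
    bb = [(j := (b >> k) & 1) for k in range(n)]  # bits of b, LSB first

    p = 0
    # Positive partial products from the non-sign bits.
    for i in range(n - 1):
        for j in range(n - 1):
            p += (ab[i] & bb[j]) << (i + j)
    # Sign bit x sign bit contributes positively at weight 2^(2n-2).
    p += (ab[n - 1] & bb[n - 1]) << (2 * n - 2)
    # Cross terms with a sign bit are complemented (weight 2^(n-1) rows).
    for i in range(n - 1):
        p += (1 ^ (ab[i] & bb[n - 1])) << (i + n - 1)
    for j in range(n - 1):
        p += (1 ^ (ab[n - 1] & bb[j])) << (j + n - 1)
    # Correction constants: +2^n and +2^(2n-1), everything mod 2^(2n).
    p += (1 << n) + (1 << (2 * n - 1))
    return p & ((1 << (2 * n)) - 1)
```

In hardware the same array is summed with 4-bit adders; here ordinary integer addition stands in for the adder tree.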
A 4-bit computer emulator that runs in the browser.
A Raspberry Pi Pico (RP2040)-based 2114 SRAM Emulator for the Busch 2090 Microtronic Computer System
Supporting code for "LLMs for your iPhone: Whole-Tensor 4 Bit Quantization"
A 4-bit ALU in Verilog
A cycle-accurate VHDL model for COP400 devices
A 4-bit TTL computer