AI inference & memory systems on AMD ROCm | Custom Triton kernels | Hierarchical retrieval | MoE architectures
- Washington State
- https://linkedin.com/in/garyjduncan
Popular repositories
- mud-puppy (Python): ROCm-first LLM fine-tuning framework with LoRA, QLoRA, DPO/GRPO, GPTQ, and ZeRO-Offload; no bitsandbytes dependency.
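The LoRA technique mud-puppy supports can be sketched as a low-rank update added to a frozen base weight. This is a minimal NumPy illustration of the idea, not mud-puppy's actual code; the shapes, the `alpha/rank` scaling convention, and the zero-initialization of `B` are assumptions based on the standard LoRA formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 64, 32, 8

W = rng.normal(size=(d_out, d_in))     # frozen base weight (not updated)
A = rng.normal(size=(r, d_in)) * 0.01  # trainable low-rank down-projection
B = np.zeros((d_out, r))               # trainable up-projection, zero-initialized

def lora_forward(x, W, A, B, alpha=16.0, rank=8):
    # Base path plus scaled adapter path: y = x W^T + (alpha/rank) * x A^T B^T
    return x @ W.T + (alpha / rank) * (x @ A.T) @ B.T

x = rng.normal(size=(4, d_in))
y = lora_forward(x, W, A, B)
# With B zero-initialized, the adapter contributes nothing at step 0,
# so the model starts out identical to the frozen base layer.
assert np.allclose(y, x @ W.T)
```

Only `A` and `B` (a tiny fraction of the base layer's parameters) receive gradients, which is what makes LoRA-style fine-tuning memory-efficient.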
- pensive (Python): Hierarchical context retrieval for LLMs; L1/L2/L3 cache tiers, spreading activation, and FAISS+BM25 hybrid search, reporting 97.9% retrieval accuracy at sub-30 ms latency.
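One common way to combine a lexical (BM25) ranking with a dense (FAISS) ranking, as pensive's hybrid search does, is reciprocal rank fusion. The sketch below is a generic illustration of that fusion step, not pensive's actual algorithm; the function name, the constant `k=60`, and the example document IDs are assumptions.

```python
def rrf_fuse(rank_lists, k=60):
    """Reciprocal rank fusion: merge several rankings into one.

    Each document's fused score is the sum of 1/(k + rank) over every
    ranking it appears in; k damps the dominance of top positions.
    """
    scores = {}
    for ranks in rank_lists:
        for pos, doc_id in enumerate(ranks, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + pos)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical rankings from a BM25 retriever and a dense retriever
sparse_ranking = ["doc_a", "doc_b", "doc_c"]
dense_ranking = ["doc_a", "doc_c", "doc_d"]
fused = rrf_fuse([sparse_ranking, dense_ranking])
assert fused[0] == "doc_a"  # ranked first by both retrievers, so it wins
```

Rank-based fusion sidesteps the problem that BM25 scores and cosine similarities live on incompatible scales, which is why it is a popular default for hybrid search.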
- aegis (HIP): AI cognitive architecture built on AMD ROCm; custom Triton INT4 inference kernels, hierarchical memory (via Pensive), MoE routing, and constitutional alignment.
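The INT4 weight format that aegis's inference kernels consume can be sketched with symmetric per-group quantization. This NumPy sketch shows the quantize/dequantize round trip only, not aegis's Triton kernels; the group size, the per-group fp32 scale, and the signed range [-8, 7] are assumptions based on common INT4 schemes.

```python
import numpy as np

def quantize_int4(w, group_size=32):
    # Symmetric per-group quantization: one fp32 scale per group of
    # weights, integer codes clipped to the signed 4-bit range [-8, 7].
    groups = w.reshape(-1, group_size)
    scale = np.abs(groups).max(axis=1, keepdims=True) / 7.0
    scale = np.maximum(scale, 1e-8)  # guard against all-zero groups
    q = np.clip(np.round(groups / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_int4(q, scale):
    # Inference kernels do this multiply on the fly while loading weights
    return (q.astype(np.float32) * scale).reshape(-1)

rng = np.random.default_rng(0)
w = rng.normal(size=256).astype(np.float32)
q, scale = quantize_int4(w)
w_hat = dequantize_int4(q, scale)
# Round-trip error is bounded by half a quantization step per group
assert np.abs(w - w_hat).max() <= 0.5 * scale.max() + 1e-6
```

Per-group scales keep outlier weights from inflating the quantization step for the whole tensor, which is the main accuracy lever in INT4 inference.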