PipeInfer: Accelerating LLM Inference using Asynchronous Pipelined Speculation
Updated Apr 15, 2024 - C++