SparseLLM/ReluLLaMA-7B · PowerInfer - faster CPU inference #174
Labels
llm
Large Language Models
llm-inference-engines
Software to run inference on large language models
ml-inference
Running and serving ML models.
sparse-computation
ReLU-based LLMs and MoE models such as Mixtral
ReluLLaMA-7B
Model creator: Meta
Original model: Llama 2 7B
Fine-tuned by: THUNLP and ModelBest
Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs). Among various approaches, the mixture-of-experts (MoE) method, exemplified by models like Mixtral, has shown particular promise. MoE works by selectively activating different model components (experts), thus optimizing resource usage.
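For illustration, here is a minimal PyTorch sketch of MoE-style selective activation. The module and parameter names are made up for this example (this is not Mixtral's actual implementation): a router scores the experts for each token and only the top-k experts are ever evaluated, so the remaining experts cost nothing for that token.

```python
# Minimal sketch of MoE-style selective activation (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, dim, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(dim, num_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )

    def forward(self, x):                               # x: (num_tokens, dim)
        scores = self.router(x)                         # (num_tokens, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # keep only the top-k experts per token
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            token_ids, slot = (idx == e).nonzero(as_tuple=True)
            if token_ids.numel() == 0:
                continue                                # expert not selected: its compute is skipped
            out[token_ids] += weights[token_ids, slot].unsqueeze(-1) * expert(x[token_ids])
        return out
```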
Recent studies (Zhang et al., 2021; Liu et al., 2023; Mirzadeh et al., 2023) reveal that LLMs inherently exhibit properties conducive to sparse computation when employing the ReLU activation function. This insight opens up new avenues for model efficiency, akin to MoE's selective activation. By dynamically choosing which model parameters participate in each computation, we can substantially boost efficiency.
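The sketch below (hypothetical names, not PowerInfer's actual kernels) shows why ReLU helps: in a LLaMA-style gated MLP, ReLU pushes many intermediate activations to exactly zero, so the corresponding rows of the up projection and columns of the down projection can be skipped without changing the result.

```python
# Sketch of ReLU-induced activation sparsity in a LLaMA-style gated MLP (illustrative only).
import torch
import torch.nn as nn

class ReluGatedMLP(nn.Module):
    """LLaMA-style gated MLP, with ReLU on the gate path instead of SiLU."""
    def __init__(self, dim=4096, hidden=11008):
        super().__init__()
        self.gate_proj = nn.Linear(dim, hidden, bias=False)
        self.up_proj = nn.Linear(dim, hidden, bias=False)
        self.down_proj = nn.Linear(hidden, dim, bias=False)

    def forward_dense(self, x):
        # Standard formulation: every neuron is computed for every token.
        return self.down_proj(torch.relu(self.gate_proj(x)) * self.up_proj(x))

    def forward_sparse(self, x):
        # For a single token x of shape (dim,): ReLU drives many gate values to exactly zero,
        # so only the "active" neurons contribute to the output.
        gate = torch.relu(self.gate_proj(x))
        active = gate.nonzero(as_tuple=True)[0]               # indices of activated neurons
        h = gate[active] * (self.up_proj.weight[active] @ x)  # only active rows of up_proj
        return self.down_proj.weight[:, active] @ h           # only active columns of down_proj
```

Note that this sketch still computes the gate densely just to find the active neurons; systems such as PowerInfer instead use small activation predictors to estimate the active set ahead of time, so the skipped work is actually saved.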
However, the widespread adoption of ReLU-based models in the LLM field remains limited. Following the transformation methods of existing works (Zhang et al., 2021; Mirzadeh et al., 2023), we convert existing models to ReLU-activated versions through fine-tuning. We hope these open-source ReLU LLMs will promote the development of sparse LLMs.
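As a rough sketch of what such a conversion looks like before fine-tuning, assuming the standard layout of the Hugging Face transformers LLaMA implementation (the fine-tuning loop itself is omitted):

```python
# Hedged sketch: swapping a LLaMA model's gate activation from SiLU to ReLU prior to fine-tuning.
import torch.nn as nn
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# Replace the SiLU gate activation with ReLU in every decoder layer's MLP.
for layer in model.model.layers:
    layer.mlp.act_fn = nn.ReLU()
model.config.hidden_act = "relu"

# The converted model now produces many exactly-zero intermediate activations,
# but needs fine-tuning (as done by THUNLP and ModelBest) to recover quality.
```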