Refactor fused MLP + fused attention loading. Fix for fused MLP requiring Triton even when not used. #85
This is a fix for: #43 (comment)
Changes:

- `modeling/_base.py`: add new classmethods `get_fused_attention_module` and `get_fused_mlp_module`. If called directly they return "this class does not support" warnings.
- `modeling/llama.py`: add the same classmethods, implementing the imports of `FusedLlamaMLPForQuantizedModel` and `FusedLlamaAttentionForQuantizedModel` with try/except blocks.
- `modeling/_base.py`: implement checks for `inject_fused_attention` and `inject_fused_mlp` which only call `get_fused_mlp_module` and `get_fused_attention_module` if the right conditions are met. In particular, `get_fused_mlp_module` is not called unless `use_triton` is True, so `FusedLlamaMLPForQuantizedModel` will not be imported unless the user specifies both `use_triton` and `inject_fused_mlp` (see the sketches after this list).
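A minimal sketch of the classmethod pattern described above. Class names are simplified stand-ins for the classes in `modeling/_base.py` and `modeling/llama.py`, and the import paths are assumptions for illustration, not necessarily the exact ones used in the PR:

```python
import logging

logger = logging.getLogger(__name__)


class BaseGPTQForCausalLM:
    """Simplified stand-in for the base class in modeling/_base.py."""

    @classmethod
    def get_fused_attention_module(cls):
        # Base behaviour: warn that fused attention is not supported for this class.
        logger.warning(f"{cls.__name__} does not support fused attention injection.")
        return None

    @classmethod
    def get_fused_mlp_module(cls):
        # Base behaviour: warn that fused MLP is not supported for this class.
        logger.warning(f"{cls.__name__} does not support fused MLP injection.")
        return None


class LlamaGPTQForCausalLM(BaseGPTQForCausalLM):
    """Simplified stand-in for the llama model class in modeling/llama.py."""

    @classmethod
    def get_fused_attention_module(cls):
        # try/except keeps loading usable when the optional dependency is absent;
        # the import path below is an assumption.
        try:
            from auto_gptq.nn_modules.fused_llama_attn import FusedLlamaAttentionForQuantizedModel
            return FusedLlamaAttentionForQuantizedModel
        except ImportError:
            logger.warning("FusedLlamaAttentionForQuantizedModel could not be imported.")
            return None

    @classmethod
    def get_fused_mlp_module(cls):
        # The fused MLP kernel depends on Triton, so this import can fail
        # on machines without it; returning None lets callers fall back.
        try:
            from auto_gptq.nn_modules.fused_llama_mlp import FusedLlamaMLPForQuantizedModel
            return FusedLlamaMLPForQuantizedModel
        except ImportError:
            logger.warning("FusedLlamaMLPForQuantizedModel could not be imported (is Triton installed?).")
            return None
```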
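And a sketch of the conditional checks, assuming a helper that receives the relevant loading flags; the function name, signature, and the `inject_to_model` hook are illustrative placeholders rather than the PR's actual code:

```python
def inject_fused_modules(model_class, model, use_triton: bool,
                         inject_fused_attention: bool, inject_fused_mlp: bool):
    """Sketch of the conditional wiring for fused module injection."""
    fused_attn_cls = None
    fused_mlp_cls = None

    # Only look up the fused attention class when injection was requested.
    if inject_fused_attention:
        fused_attn_cls = model_class.get_fused_attention_module()

    # The fused MLP path is Triton-only: skip the lookup (and the Triton-dependent
    # import behind it) unless use_triton is True as well.
    if inject_fused_mlp and use_triton:
        fused_mlp_cls = model_class.get_fused_mlp_module()

    if fused_attn_cls is not None:
        fused_attn_cls.inject_to_model(model)  # illustrative hook name
    if fused_mlp_cls is not None:
        fused_mlp_cls.inject_to_model(model)  # illustrative hook name
    return model
```

With this split, simply loading a model never touches the Triton-backed import path; it is only reached when both `use_triton` and `inject_fused_mlp` are set.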
Testing done: