
Commit

FIX / bnb: fix torch compatibility issue with `itemsize` (#30162)
* fix torch compatibility issues

* fix

* Update src/transformers/modeling_utils.py
younesbelkada authored and ArthurZucker committed Apr 22, 2024
1 parent c0b306a commit 61b2143
Showing 1 changed file with 5 additions and 2 deletions: src/transformers/modeling_utils.py
@@ -1159,9 +1159,12 @@ def num_parameters(self, only_trainable: bool = False, exclude_embeddings: bool
         # For 4bit models, we need to multiply the number of parameters by 2 as half of the parameters are
         # used for the 4bit quantization (uint8 tensors are stored)
         if is_loaded_in_4bit and isinstance(param, bnb.nn.Params4bit):
-            total_numel.append(
-                param.numel() * 2 * self.hf_quantizer.quantization_config.bnb_4bit_quant_storage.itemsize
+            quant_storage = self.hf_quantizer.quantization_config.bnb_4bit_quant_storage
+            # For compatibility with older PT version - see: https://github.com/huggingface/peft/pull/1635
+            nb_params = (
+                quant_storage.itemsize if hasattr(quant_storage, "itemsize") else quant_storage.element_size()
             )
+            total_numel.append(param.numel() * 2 * nb_params)
         else:
             total_numel.append(param.numel())
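The fix boils down to an `hasattr` feature probe: newer PyTorch dtypes expose an `itemsize` attribute, while older releases only provide the byte size through an `element_size()` call. A minimal, self-contained sketch of that pattern, using hypothetical `ModernDtype`/`LegacyDtype` stand-ins in place of real `torch.dtype` objects:

```python
class ModernDtype:
    """Stand-in for a newer-PyTorch dtype: byte size exposed as `itemsize`."""
    itemsize = 1  # e.g. a uint8 storage dtype is one byte per element


class LegacyDtype:
    """Stand-in for an older-PyTorch dtype: only `element_size()` exists."""
    def element_size(self):
        return 2  # e.g. a float16 storage dtype is two bytes per element


def storage_nbytes(quant_storage):
    # Same branch as the commit: prefer the newer `.itemsize` attribute,
    # and fall back to the older `element_size()` API when it is absent.
    return (
        quant_storage.itemsize
        if hasattr(quant_storage, "itemsize")
        else quant_storage.element_size()
    )


print(storage_nbytes(ModernDtype()))  # 1
print(storage_nbytes(LegacyDtype()))  # 2
```

The surrounding multiplication in the patch follows from the storage scheme: with `uint8` storage, each stored byte packs two 4-bit weights, hence the `param.numel() * 2 * nb_params` when counting parameters.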

