Since #29390, model loading auto-starts a safetensors conversion for non-safetensors models and attempts to open a PR... and then crashes. Huh?
Shouldn't this feature be behind a toggle defaulting to False, instead of being auto-enabled for everyone?
Exception in thread Thread-autoconversion:
Traceback (most recent call last):
File "/root/miniconda3/lib/python3.11/threading.py", line 1038, in _bootstrap_inner
self.run()
File "/root/miniconda3/lib/python3.11/threading.py", line 975, in run
self._target(*self._args, **self._kwargs)
File "/root/miniconda3/lib/python3.11/site-packages/transformers/safetensors_conversion.py", line 89, in auto_conversion
sha = get_conversion_pr_reference(api, pretrained_model_name_or_path, **cached_file_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/lib/python3.11/site-packages/transformers/safetensors_conversion.py", line 82, in get_conversion_pr_reference
sha = f"refs/pr/{pr.num}"
^^^^^^
AttributeError: 'NoneType' object has no attribute 'num'
Expected behavior
Not crash and burn
Not auto-start the safetensors conversion
It was reverted; feel free to update your clone. This call should only happen in the background with no effect on your runtime; the conversion happens server-side.
This is to keep pushing safetensors forward as an alternative to raw .bin checkpoints, as we are seeing more bad actors ship pure .bin checkpoints. Thanks again for your report!
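Since the conversion is purely opportunistic, the background thread can be isolated so that a failure never surfaces to the caller (the traceback above shows the thread's exception being printed even though the main process survives). A minimal sketch of that pattern, with a stub standing in for the real auto_conversion call:

```python
# Sketch (not the actual transformers code) of running a best-effort task
# in a daemon thread whose failure is swallowed rather than printed.
import threading

def _auto_conversion_stub():
    # Stand-in for the real auto_conversion(); here it always fails,
    # mimicking the AttributeError from the report.
    raise AttributeError("'NoneType' object has no attribute 'num'")

results = []

def _safe_background(target):
    def runner():
        try:
            target()
            results.append("ok")
        except Exception:
            # Swallow errors: the task is opportunistic, so a failure
            # should not disturb the user's runtime or log a traceback.
            results.append("failed-silently")
    t = threading.Thread(target=runner, name="Thread-autoconversion", daemon=True)
    t.start()
    return t

t = _safe_background(_auto_conversion_stub)
t.join()
print(results)  # ['failed-silently']
```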
System Info
Ubuntu 22.04
Torch 2.2.1
Transformers at the latest git HEAD
Who can help?
@LysandreJik @ArthurZucker