TPU crash when importing Trainer from transformers #6990
Comments
The flag is from https://github.com/pytorch/xla/blob/r2.2/torch_xla/__init__.py#L43-L44. I am trying to get my Kaggle TPU and see if I can repro this.
OK, I was able to confirm that it did crash. I tried to install the new torch 2.3 on my TPU VM with:

and this seems to work.
There is also an in-flight PR to update the default torch version to 2.3. Would you mind manually installing 2.3 for now?
Ah, I know: it is Kaggle that preinstalls it.
fixed the issue on my end.
I will assign this bug to @will-cromar to add a warning message to make this clearer in future releases.
This specific problem was solved with (#6990 (comment)).

Let us know if you're still having issues.
🐛 Bug
The Colab/Kaggle notebook crashes while trying to import 'Trainer' from the transformers library.
To Reproduce
```
!pip install transformers
!pip install torch_xla[tpu] -f https://storage.googleapis.com/libtpu-releases/index.html

from transformers import Trainer
```
or
```
!pip install transformers
!pip install torch_xla[tpu]

from transformers import Trainer
```
or
```
!pip install transformers
!pip install torch_xla

from transformers import Trainer
```
Running any of the variants above produces:

```
ERROR: Unknown command line flag 'xla_latency_hiding_scheduler_rerun'
```
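The thread suggests the root cause is a version mismatch: Kaggle preinstalls a torch_xla built against a different torch than the one pip pulls in. As a quick diagnostic, a stdlib-only sketch like the one below (the helper name is my own, not from this thread or any library) prints the installed versions so you can spot a mismatch before importing anything:

```python
# Hedged sketch: compare installed torch / torch_xla / transformers versions.
# Uses only the standard library, so it runs even before torch is importable.
from importlib import metadata


def installed_version(package: str):
    """Return the installed version of `package`, or None if it is absent."""
    try:
        return metadata.version(package)
    except metadata.PackageNotFoundError:
        return None


for pkg in ("torch", "torch_xla", "transformers"):
    print(f"{pkg}: {installed_version(pkg) or 'not installed'}")
```

If the major.minor versions of torch and torch_xla disagree, reinstalling both at the same release (as the maintainer suggests with 2.3) is the likely fix.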
Environment