
No dependency package accelerate installed when installing transformers v4.29.1 #23323

Closed

PenghuiCheng opened this issue May 12, 2023 · 14 comments

@PenghuiCheng

System Info

transformers v4.29.1
torch 2.0.1

Who can help?

No response

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • My own task or dataset (give details below)

Reproduction

cd examples/pytorch/text-classification
export TASK_NAME=mrpc
python run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name $TASK_NAME \
  --do_train \
  --do_eval \
  --max_seq_length 128 \
  --per_device_train_batch_size 32 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir /tmp/$TASK_NAME

The log is:
Traceback (most recent call last):
  File "/home/penghuic/transformers/examples/pytorch/text-classification/run_glue.py", line 623, in <module>
    main()
  File "/home/penghuic/transformers/examples/pytorch/text-classification/run_glue.py", line 217, in main
    model_args, data_args, training_args = parser.parse_args_into_dataclasses()
  File "/home/penghuic/transformers/src/transformers/hf_argparser.py", line 346, in parse_args_into_dataclasses
    obj = dtype(**inputs)
  File "<string>", line 111, in __init__
  File "/home/penghuic/transformers/src/transformers/training_args.py", line 1333, in __post_init__
    and (self.device.type != "cuda")
  File "/home/penghuic/transformers/src/transformers/training_args.py", line 1697, in device
    return self._setup_devices
  File "/home/penghuic/transformers/src/transformers/utils/generic.py", line 54, in __get__
    cached = self.fget(obj)
  File "/home/penghuic/transformers/src/transformers/training_args.py", line 1613, in _setup_devices
    raise ImportError(
ImportError: Using the Trainer with PyTorch requires accelerate: Run pip install --upgrade accelerate

Expected behavior

We expect the accelerate package to be installed as a dependency when we install transformers.

@amyeroberts
Collaborator

Hi @PenghuiCheng,

All of the examples have their own unique requirements, which are listed in their own requirements.txt files. The examples demonstrate how to perform certain tasks with the transformers library, but their requirements are not dependencies of transformers itself.
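
For reference, a minimal sketch of picking up an example's own requirements, assuming you are running the text-classification example from a clone of the repository and that the example folder ships a requirements.txt as described above:

cd examples/pytorch/text-classification
pip install -r requirements.txt        # the example-specific dependencies
pip install --upgrade accelerate       # the Trainer with PyTorch also needs accelerate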

@muellerzr
Contributor

@PenghuiCheng you need to do pip install transformers[torch] to make sure the right extras (including accelerate) are installed

@mohamedoh

Same error! Any solution?

@muellerzr
Contributor

@mohamedoh you need to do pip install accelerate, or pip install transformers[torch]
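
Concretely, either of these should pull accelerate into the environment the script runs in (a sketch; run it with the same pip/python the script uses):

pip install --upgrade accelerate
# or install transformers together with its PyTorch extras, which include accelerate
pip install "transformers[torch]"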

@flckv

flckv commented May 30, 2023

I'm facing the same issue even after the installations.

@muellerzr
Contributor

@flckv if you're in a notebook or similar you'll need to restart the session. Does pip show accelerate show anything? (That tells you whether accelerate is actually installed in the environment you're running in.)
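
A quick sketch of that check, assuming the pip and python below belong to the same environment the notebook kernel uses:

pip show accelerate                                            # no output means accelerate is not installed here
python -c "import accelerate; print(accelerate.__version__)"  # fails if the kernel's environment cannot see it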

@Krish1375

Krish1375 commented May 31, 2023

pip show accelerate shows version 0.19.0, but I'm still getting the error
ImportError: Using the Trainer with PyTorch requires accelerate: Run pip install --upgrade accelerate
on both Colab and Jupyter.

@muellerzr
Contributor

muellerzr commented May 31, 2023

@Krish1375 you may need to restart the notebook session to pick up the newly installed library

@github-actions

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

github-actions bot closed this as completed Jul 2, 2023
@shahabty

shahabty commented Aug 1, 2023

I have the same issue, but apparently there is no solution for it.

@TheoStaicov

I did exactly what was written in the Colab notebook:
https://colab.research.google.com/drive/1jCkpikz0J2o20FBQmYmAGdiKmJGOMo-o?usp=sharing#scrollTo=cg3fiQOvmI3Q

The first cell ran fine. Running the second cell gave this error message:
ImportError: Using load_in_8bit=True requires Accelerate: pip install accelerate and the latest version of bitsandbytes pip install -i https://test.pypi.org/simple/ bitsandbytes or pip install bitsandbytes
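
A minimal sketch of the installs that error message asks for, assuming a CUDA-capable runtime (the package index is the one given in the error text):

pip install --upgrade accelerate
pip install bitsandbytes       # or, per the error message: pip install -i https://test.pypi.org/simple/ bitsandbytes
# then restart the notebook runtime so the newly installed packages are picked up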

@alkollo

alkollo commented Aug 23, 2023

> Did exactly what was written on colab.research: https://colab.research.google.com/drive/1jCkpikz0J2o20FBQmYmAGdiKmJGOMo-o?usp=sharing#scrollTo=cg3fiQOvmI3Q
>
> ran the first cell, was ok. ran the 2nd cell, got this error message: ImportError: Using load_in_8bit=True requires Accelerate: pip install accelerate and the latest version of bitsandbytes pip install -i https://test.pypi.org/simple/ bitsandbytes or pip install bitsandbytes

Got the same problem as well, any solution?

@Olivier-aka-Raiden

Olivier-aka-Raiden commented Aug 24, 2023

I have the same issue; nothing worked for me and accelerate is never detected by transformers.
Edit: I suggest downgrading to transformers 4.30.1 and accelerate 0.21.0. That worked for me, and I'll wait for them to fix the dependencies.
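
A sketch of that pinned downgrade, using exactly the versions suggested above:

pip install transformers==4.30.1 accelerate==0.21.0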

@alkollo

alkollo commented Aug 24, 2023

> Did exactly what was written on colab.research: https://colab.research.google.com/drive/1jCkpikz0J2o20FBQmYmAGdiKmJGOMo-o?usp=sharing#scrollTo=cg3fiQOvmI3Q
> ran the first cell, was ok. ran the 2nd cell, got this error message: ImportError: Using load_in_8bit=True requires Accelerate: pip install accelerate and the latest version of bitsandbytes pip install -i https://test.pypi.org/simple/ bitsandbytes or pip install bitsandbytes
>
> Got the same problem as well, any solution?

My bad, I just realized I did not run on GPU; I had to switch to GPU mode in the Colab settings. Now it works. Check your environment and make sure you are on a GPU runtime (Colab defaults to CPU).
Good luck all.
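
For reference, a quick sketch of checking whether the runtime actually sees a GPU (assumes torch is installed; nvidia-smi is only available on NVIDIA runtimes):

python -c "import torch; print(torch.cuda.is_available(), torch.cuda.device_count())"
nvidia-smi    # should list the GPU if one is attached to the runtime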
