
Running Error #44

Open
wangyang-stu opened this issue Nov 22, 2023 · 2 comments

Comments

@wangyang-stu

When launching finetune.py with the following command:
CUDA_VISIBLE_DEVICES=0,1,2,3,4 accelerate launch finetune.py --output-dir output/yarn-7b-64k --model /data/wy/llm_base/Llama-2-7b-hf --dataset /data/wy/LLMScaledData/pg_books-tokenized-bos-eos-chunked-6/data

The following error occurred:
Traceback (most recent call last):
  File "/data/wy/yarn/finetune.py", line 293, in <module>
    main(args.parse_args())
  File "/data/wy/yarn/finetune.py", line 156, in main
    model.gradient_checkpointing_enable()
  File "/home/centos/anaconda3/envs/llm_sacled/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1614, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'DistributedDataParallel' object has no attribute 'gradient_checkpointing_enable'

The fix is to change model.gradient_checkpointing_enable() to model.module.gradient_checkpointing_enable().
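A minimal sketch of why this works: DistributedDataParallel wraps the original model, so methods defined on the underlying Hugging Face model (like gradient_checkpointing_enable) must be reached through the wrapper's .module attribute. A defensive unwrap helper (the helper name unwrap_model is my own, not from finetune.py) avoids hardcoding .module, so the same line works whether or not the model has been wrapped:

```python
def unwrap_model(model):
    """Return the underlying model if it is wrapped (e.g. by
    DistributedDataParallel, which exposes it as `.module`);
    otherwise return the model unchanged."""
    return getattr(model, "module", model)

# In finetune.py this would replace the failing line:
#     model.gradient_checkpointing_enable()
# with:
#     unwrap_model(model).gradient_checkpointing_enable()
```

This way the call no longer raises AttributeError when accelerate has wrapped the model in DDP, and it still works in single-process runs where no wrapper exists.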

@ichsan2895

I have the same problem. Did changing model.gradient_checkpointing_enable() to model.module.gradient_checkpointing_enable() solve it?

@18140663659

model.module.gradient_checkpointing_enable()

Same question here.
