
'CTCTrainer' object has no attribute 'use_amp' #45

Closed
its-ogawa opened this issue Jul 5, 2022 · 5 comments · Fixed by #58

Comments

@its-ogawa

I'm using the latest huggingsound.

#!pip list | grep huggingsound
huggingsound 0.1.4

An AttributeError occurs when fine-tuning as shown in the sample below:
https://github.com/jonatasgrosman/huggingsound#fine-tuning

/usr/local/lib/python3.7/dist-packages/huggingsound/trainer.py in training_step(self, model, inputs)
432 inputs = self._prepare_inputs(inputs)
433
--> 434 if self.use_amp:
435 with torch.cuda.amp.autocast():
436 loss = self.compute_loss(model, inputs)

AttributeError: 'CTCTrainer' object has no attribute 'use_amp'

Can you find the cause?

@its-ogawa
Author

It's not a proper solution, but I removed the use_amp references from the source code. That is, I commented them out and ran it.
https://github.com/jonatasgrosman/huggingsound/blob/main/huggingsound/trainer.py

        #if self.use_amp:
        #    with torch.cuda.amp.autocast():
        #        loss = self.compute_loss(model, inputs)
        #else:
        loss = self.compute_loss(model, inputs)

        #if self.use_amp:
        #    self.scaler.scale(loss).backward()
        #elif self.deepspeed:
        if self.deepspeed:
            self.deepspeed.backward(loss)
        else:
            loss.backward()

This is a crude workaround, and I hope you can point me to the correct way to handle this.

I don't know whether this is the cause, but with the model fine-tuned in this state, evaluate fails.
I hope you can look into that as well.
cf. #46
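A less invasive alternative to commenting the branches out could be a small compatibility shim that reads whichever attribute name the installed transformers version exposes. This is only a sketch; the `amp_enabled` helper and the `OldTrainer`/`NewTrainer` classes below are illustrative stand-ins, not part of huggingsound or transformers.

```python
def amp_enabled(trainer) -> bool:
    """Read the AMP flag whether the Trainer exposes the old `use_amp`
    name or the newer `use_cuda_amp` name; default to False if neither
    attribute is present."""
    return bool(getattr(trainer, "use_cuda_amp",
                        getattr(trainer, "use_amp", False)))

# Illustrative stand-ins for trainers built against different
# transformers versions (not real huggingsound classes):
class OldTrainer:          # older transformers: has use_amp
    use_amp = True

class NewTrainer:          # newer transformers: has use_cuda_amp
    use_cuda_amp = False

print(amp_enabled(OldTrainer()))  # True
print(amp_enabled(NewTrainer()))  # False
```

The same `getattr` chain could replace the bare `self.use_amp` accesses in trainer.py without deleting the AMP code path.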

@arikhalperin

arikhalperin commented Aug 16, 2022

It's related to the transformers version. I use the latest transformers; the correct fix is to use use_cuda_amp:

        if self.use_cuda_amp:
            self.scaler.scale(loss).backward()
        elif self.deepspeed:
            self.deepspeed.backward(loss)
        else:
            loss.backward()

@its-ogawa
Author

Thanks for answering.

Which version of transformers are you using, and how did you install it?

@arikhalperin

transformers==4.21.1
pip install transformers

@nkaenzig-aifund
Contributor

transformers changed the name of the Trainer property use_amp to use_cuda_amp in this PR:
huggingface/transformers#17138

Either pin transformers==4.19.2 or rename the property access as suggested by arikhalperin.
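If you want the code to work across both old and new transformers releases, one option is to pick the attribute name from the installed version. Per the thread, `use_amp` still exists in 4.19.2 and `use_cuda_amp` exists in 4.21.1; the exact 4.20.0 cutoff in this sketch is an assumption for illustration, and `amp_attr_name` is a hypothetical helper, not a huggingsound function.

```python
def amp_attr_name(version: str) -> str:
    """Return the Trainer AMP attribute name for a transformers version
    string, assuming the use_amp -> use_cuda_amp rename landed in 4.20."""
    major, minor = (int(part) for part in version.split(".")[:2])
    return "use_cuda_amp" if (major, minor) >= (4, 20) else "use_amp"

print(amp_attr_name("4.19.2"))  # use_amp
print(amp_attr_name("4.21.1"))  # use_cuda_amp
```

Inside trainer.py this would be used as `getattr(self, amp_attr_name(transformers.__version__))`.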

nkaenzig added a commit to nkaenzig/huggingsound that referenced this issue Aug 25, 2022
support use_cuda_amp property name used in recent transformers versions

closes jonatasgrosman#45