Training in mixed precision #3

Closed
bdalal opened this issue Mar 22, 2021 · 3 comments
Labels
question Further information is requested

Comments

bdalal commented Mar 22, 2021

❓ Questions and Help

Before asking:

  1. search the issues.
  2. search the docs.

What is your question?

Hi, thanks for the amazing contribution!
I'm trying to use IBert from Huggingface/transformers (4.4.2) in my own training pipeline, where I'm fine-tuning in quant mode with mixed precision (using PyTorch's cuda.amp module). This results in overflows in the QuantLinear layers, which causes subsequent training to break due to NaNs. I'm considering artificially clamping the weights to a smaller range to avoid this, or using a lower bit precision (from 8 down to, say, 4) while fine-tuning.

I'm wondering if you have tried this or have any suggestions about my approaches that could help me train effectively.

Thanks.

Code

    from torch.cuda.amp import autocast

    with autocast(enabled=grad_scaler.is_enabled()):
        # TRAINING CODE...

I'm unable to post any more code (proprietary stuff, sorry!), but I can provide some specifics if you need them.
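
To make the clamping idea above concrete, here is roughly what I have in mind (just an untested sketch; the clamp bound is a placeholder I would still need to tune):

    import torch

    CLAMP_BOUND = 1.0  # placeholder value; would need tuning

    def clamp_quant_linear_weights(model, bound=CLAMP_BOUND):
        # Clamp the fp32 weights of every QuantLinear module into a smaller
        # range before each forward pass, hoping to keep the fp16 values
        # produced under autocast from overflowing.
        for module in model.modules():
            if module.__class__.__name__ == "QuantLinear":
                with torch.no_grad():
                    module.weight.clamp_(-bound, bound)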

What have you tried?

What's your environment?

  • fairseq Version (e.g., 1.0 or master):
  • PyTorch Version (e.g., 1.0): 1.8.0
  • OS (e.g., Linux): Ubuntu 18.04
  • How you installed fairseq (pip, source):
  • Build command you used (if compiling from source):
  • Python version:
  • CUDA/cuDNN version: 10.1/7.6.5
  • GPU models and configuration:
  • Any other relevant information:
bdalal added the question label on Mar 22, 2021
bdalal (Author) commented Mar 22, 2021

I manually disabled autocasting in the linear blocks and got the forward pass to work, but now I'm getting NaNs in the backward pass.
I'll update with more details if I'm able to get the model to train stably in mixed precision.
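
For reference, by "disabling autocasting in the linear blocks" I mean something like the wrapper below (a rough sketch of the workaround, not the actual I-BERT code):

    import torch
    from torch.cuda.amp import autocast

    class FP32Linear(torch.nn.Linear):
        # Hypothetical wrapper: forces this layer to run in fp32 even when
        # the surrounding training loop is inside an autocast region.
        def forward(self, x):
            with autocast(enabled=False):
                # Inputs may arrive as fp16 from the autocast region, so
                # cast them back to fp32 before the fp32 matmul.
                return super().forward(x.float())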

kssteven418 (Owner) commented Mar 31, 2021

Hi, thanks for your interest, and apologies for the late response.
Do you still encounter the same problem?
We haven't tried bit precisions lower than the default 8-bit setting, so we have not run into this issue ourselves. The code most likely doesn't account for lower bit precisions, and there may be corner cases we haven't debugged.
If you have already found a solution, we would greatly appreciate it if you could open a PR for it.

bdalal (Author) commented Mar 31, 2021

Thanks for your response!

I got around that issue by disabling autocasting in the linear blocks, but I realized that this defeats the purpose of mixed precision training: most of the computation happens in the linear layers, which run in fp32 once autocast is disabled there, so it yielded no improvement in training time.
I've given up on it for now because I had to move on to other things, but I'll definitely post an update if I get it working in the future.

bdalal closed this as completed on Mar 31, 2021