Hi, thanks for the amazing contribution!
I'm trying to use IBert from Huggingface/transformers (4.4.2) in my own training pipeline, where I'm fine-tuning in quant mode with mixed precision (using PyTorch's cuda.amp module). This results in overflows in the QuantLinear layers, which causes subsequent training to break due to NaNs. I'm considering artificially clamping the weights to a smaller range to avoid this, or using a lower bit precision (from 8 down to, say, 4) while fine-tuning.
I'm wondering if you have tried this or have any suggestions about my approaches that could help me train effectively.
Thanks.
Code
from torch.cuda.amp import autocast

with autocast(enabled=grad_scaler.is_enabled()):
    # TRAINING CODE...
I'm unable to post any more code (proprietary stuff, sorry!), but I can provide some specifics if you need them.
What have you tried?
What's your environment?
fairseq Version (e.g., 1.0 or master):
PyTorch Version (e.g., 1.0): 1.8.0
OS (e.g., Linux): Ubuntu 18.04
How you installed fairseq (pip, source):
Build command you used (if compiling from source):
Python version:
CUDA/cuDNN version: 10.1/7.6.5
GPU models and configuration:
Any other relevant information:
I manually disabled autocasting in the linear blocks and got the forward pass to work, but I'm getting nans now in the backward pass.
I'll update with more details if I'm able to train the model in a stable manner in mixed precision.
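To locate where the NaNs first appear, a forward-hook-based check can help. This is a hedged debugging sketch of my own (the helper name and the RuntimeError message are mine, not from fairseq or transformers):

```python
import torch
from torch import nn

def add_nan_hooks(model):
    # Hypothetical debugging aid: raise on the first module whose output
    # contains inf/NaN, to pinpoint where the fp16 overflow starts.
    def check(module, inputs, output):
        if isinstance(output, torch.Tensor) and not torch.isfinite(output).all():
            raise RuntimeError(f"non-finite output in {module.__class__.__name__}")
    for m in model.modules():
        m.register_forward_hook(check)

model = nn.Sequential(nn.Linear(4, 4))
add_nan_hooks(model)
out = model(torch.randn(2, 4))  # finite input: passes through without raising
```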
Hi, thanks for your interest and I apologize for my late response.
Do you still encounter the same problem?
We haven't tried anything below the default 8-bit setting, so we have not encountered this issue ourselves.
It is probably because the code was not written with lower bit precisions in mind, and there may be corner cases we haven't debugged.
If you already have a solution for this, we would greatly appreciate it if you could open a PR.
I got around that issue by disabling autocasting in the linear blocks, but I realized afterwards that this defeats the purpose of mixed-precision training: most of the computation is in the linear layers, which now run in fp32, so it yielded no improvement in training time.
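The workaround described here can be sketched roughly as below. The class name is mine, not from I-BERT, and I use the newer torch.autocast API for portability; on the PyTorch 1.8 in this thread, the inner context would be torch.cuda.amp.autocast(enabled=False) instead.

```python
import torch
from torch import nn

class FP32Linear(nn.Linear):
    # Illustrative sketch: force this layer to run in fp32 even when
    # it is called inside an autocast region, by disabling autocast
    # locally and upcasting the input.
    def forward(self, x):
        with torch.autocast(device_type=x.device.type, enabled=False):
            return super().forward(x.float())

layer = FP32Linear(4, 2)
out = layer(torch.randn(3, 4))
```

As noted above, this restores numerical stability but sacrifices most of the speedup, since the linear layers dominate the compute.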
I've given up on it for now because I had to move on to other things, but I'll definitely provide an update if I get it working in the future.