Hi @patrickvonplaten,
I encountered an error while following this blog to fine-tune on a Chinese dataset.
After I solved the previous problem (RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn),
I ran into the following error, which I don't know how to solve:
Traceback (most recent call last):
File "/media/xzw/WORK/fairseq/work/3.py", line 220, in
trainer.train()
File "/home/xzw/anaconda3/envs/work/lib/python3.7/site-packages/transformers/trainer.py", line 1092, in train
self.scaler.step(self.optimizer)
File "/home/xzw/anaconda3/envs/work/lib/python3.7/site-packages/torch/cuda/amp/grad_scaler.py", line 318, in step
assert len(optimizer_state["found_inf_per_device"]) > 0, "No inf checks were recorded for this optimizer."
AssertionError: No inf checks were recorded for this optimizer.
0%| | 0/112620 [00:02<?, ?it/s]
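A note on what may be happening, based on the two errors together (this is an assumption, not something the traceback alone confirms): `GradScaler.step()` raises "No inf checks were recorded for this optimizer" when `unscale_` finds no parameter gradients to check, which typically means the optimizer's parameters never received gradients. If the earlier `requires_grad` error was worked around by freezing all parameters (or detaching the loss), that would produce exactly this symptom under fp16. A minimal sketch of the check, using a stand-in `nn.Linear` model:

```python
import torch.nn as nn

# Stand-in for the fine-tuned model (hypothetical, for illustration only).
model = nn.Linear(4, 2)

# Freezing every parameter (a common accidental cause) leaves the optimizer
# with nothing to update; under fp16, GradScaler.step() then fails with
# "No inf checks were recorded for this optimizer."
for p in model.parameters():
    p.requires_grad = False

trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(trainable)  # -> []  (no trainable parameters: the failure condition)

# Fix: make sure at least the layers you intend to train keep requires_grad=True.
for p in model.parameters():
    p.requires_grad = True

trainable = sorted(n for n, p in model.named_parameters() if p.requires_grad)
print(trainable)  # -> ['bias', 'weight']
```

If some parameters should legitimately stay frozen (e.g. a feature extractor), check that the remaining trainable ones are actually on the path from input to loss; alternatively, running once with `fp16=False` in `TrainingArguments` can help confirm whether the problem is specific to mixed precision.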
Best regards
xiao