🐛 Bug
Information
Model I am using (Bert, XLNet ...): jplu/tf-camembert-base
Language I am using the model on (English, Chinese ...): French
The problem arises when using:
The tasks I am working on are:
To reproduce
Steps to reproduce the behavior:
- use `class_weight`
- or under-sample the biggest classes (accuracy and loss change, but still don't improve over epochs)

Expected behavior
The classifier should improve over the epochs. In this case it stays at the same accuracy and loss, only fluctuating by roughly ±5% accuracy.
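The two mitigations tried in the reproduce steps, passing class weights to `fit` and under-sampling the majority classes, can be sketched in plain Python. The helpers and the 80/15/5 label split below are illustrative assumptions, not the original code:

```python
import random
from collections import Counter

def balanced_class_weights(labels):
    """Inverse-frequency weights in the shape Keras's class_weight
    argument expects: n_samples / (n_classes * class_count),
    so rare classes weigh more."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {cls: n / (k * c) for cls, c in counts.items()}

def undersample(samples, labels, seed=0):
    """Randomly drop examples from majority classes so every class
    keeps as many examples as the rarest one."""
    rng = random.Random(seed)
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    floor = min(len(v) for v in by_class.values())
    out = []
    for y, group in by_class.items():
        out.extend((s, y) for s in rng.sample(group, floor))
    rng.shuffle(out)
    return out

# Hypothetical skewed label distribution (80/15/5 over 3 classes):
labels = [0] * 80 + [1] * 15 + [2] * 5
weights = balanced_class_weights(labels)   # rare class 2 gets the largest weight
texts = [f"doc{i}" for i in range(100)]
balanced = undersample(texts, labels)      # 5 examples per class, 15 total
# model.fit(x, y, class_weight=weights, ...)  # how the weights would be used
```

Either way, only the relative loss contribution of each class changes; if the model still oscillates around the same accuracy, the cause is likely elsewhere (as the Flaubert comparison below suggests).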
For comparison, I ran the same code with TFFlaubertForSequenceClassification.from_pretrained("jplu/tf-flaubert-base-cased") and it worked as expected.

Environment info
transformers version: 2.5.1
Platform: Linux-4.9.0-12-amd64-x86_64-with-debian-9.12 (Google AI Platform)
Python version: 3.7.6
PyTorch version (GPU?): 1.4.0 (True)
Tensorflow version (GPU?): 2.1.0 (True)
Using GPU in script?: Yes
Using distributed or parallel set-up in script?: No

For information, I already posted this problem on Stack Overflow, which led me here.