multi class is not working #25
Comments
FYI, the minimal example had run successfully before I tried it with my dataset.
Is that all of your data? The labels need to be integers from 0 up to num_labels - 1.
Sorry for not being clear! No, the dataset is around 360k rows; I just posted the first 4 rows as a sample. Copy-pasting the label value_counts again here to give a feel for the label distribution:
My bad, I didn't notice you'd included the value counts in the original comment. I can't spot any obvious issues here that would cause this error. Are you using the latest versions of Simple Transformers and PyTorch?
Here are the various package versions. Thanks again for looking into it!
Those seem fine. What version of Transformers are you using? This error is usually caused by having more labels than num_labels and/or by having a label greater than or equal to num_labels, but I can't see either of those cases in your data. Does it work if you use the AG News dataset as in the Medium article?
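For illustration, a minimal sketch of a check for that failure mode, assuming a pandas DataFrame with a label column (the toy data and names here are assumptions, not from the thread):

```python
import pandas as pd

num_labels = 4  # the value passed to the model
train_df = pd.DataFrame({"text": ["t1", "t2", "t3"], "label": [0, 2, 5]})  # toy data

# Every label must be an integer in [0, num_labels); a 5 here would trip the CUDA assert.
bad = sorted(set(train_df["label"]) - set(range(num_labels)))
if bad:
    raise ValueError(f"labels outside [0, {num_labels - 1}]: {bad}")
```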
My Hugging Face Transformers version is 2.1.1. Let me try the AG News dataset and report back shortly.
Same error with the AG News dataset as well... pasting below:

Defaults for this optimization level are: ...
I think this line was missing in the Medium article. Can you try it with it included? The train_df value counts should have the labels 0 through 3.
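Presumably the missing line shifts the AG News labels, which ship as 1-4, down to the 0-3 range the model expects; a sketch under that assumption:

```python
import pandas as pd

train_df = pd.DataFrame({"text": ["t1", "t2"], "label": [1, 4]})  # AG News-style 1-4 labels
train_df["label"] = train_df["label"] - 1                         # now 0-3, so num_labels=4 works
```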
Hi @ThilinaRajapakse |
Great to hear that you got it to work!
I'll look into this. It should still work even if you have more columns. Unfortunately, hyperparameter tuning is still largely trial and error, but I can give a couple of pointers that may be useful. For Transformers, 2-4 training epochs are usually sufficient. In my experience, good learning rates usually fall in the 1e-4 to 5e-5 range. Those are still rough estimates, but they work as a starting point.
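A sketch of how those pointers could be passed in, assuming the TransformerModel API this era of Simple Transformers exposed (class name and arg keys are assumptions; later releases renamed things):

```python
from simpletransformers.model import TransformerModel

model = TransformerModel(
    "bert",
    "bert-base-cased",
    num_labels=4,
    args={
        "num_train_epochs": 3,   # 2-4 epochs is usually sufficient for Transformers
        "learning_rate": 4e-5,   # good values tend to fall in the 1e-4 to 5e-5 range
    },
)
```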
Another issue came up when I tried to load the pre-trained model and predict. I was able to load the model, but prediction/eval failed. Please find the errors below:
Did you follow the same procedure for the evaluation dataset as for the training dataset? The labels in the evaluation dataset need to be the same as the labels in the training dataset.
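A quick way to sanity-check that, assuming both DataFrames carry a label column (toy data for illustration):

```python
import pandas as pd

train_df = pd.DataFrame({"label": [0, 1, 2, 3]})
eval_df = pd.DataFrame({"label": [0, 1, 4]})  # 4 never appeared in training

extra = set(eval_df["label"]) - set(train_df["label"])
if extra:
    print(f"eval_df has labels absent from training: {sorted(extra)}")
```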
Yup, it's the same! FYI, model.predict("sample text") is working fine; model.eval_model is the one that's throwing this error.
What happens if you try running eval on the training dataset itself?
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. |
Hi @ThilinaRajapakse,
I added a custom metric along these lines
=> how and when should I provide it?
You need to wrap the f1_score function so that the average argument is fixed:

```python
from sklearn.metrics import f1_score

def f1_score_micro(y_true, y_pred):
    return f1_score(y_true, y_pred, average="micro")
```
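The wrapped metric can then be handed to eval_model; a sketch, assuming this version accepts extra metrics as keyword arguments (the f1 keyword name is arbitrary):

```python
# model and eval_df as set up earlier in the thread
result, model_outputs, wrong_predictions = model.eval_model(eval_df, f1=f1_score_micro)
```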
Hi,
I am getting the following error while trying to train a multiclass model. Any help is much appreciated!

My df_train looks like the following:

```
    text  id  label alpha
0  text1   0      2     a
1  text2   1      2     a
2  text3   2      3     a
3  text4   3      2     a
4  text5   4      2     a
```
```
Defaults for this optimization level are:
enabled : True
opt_level : O1
cast_model_type : None
patch_torch_functions : True
keep_batchnorm_fp32 : None
master_weights : None
loss_scale : dynamic
Processing user overrides (additional kwargs that are not None)...
After processing overrides, optimization options are:
enabled : True
opt_level : O1
cast_model_type : None
patch_torch_functions : True
keep_batchnorm_fp32 : None
master_weights : None
loss_scale : dynamic
Epoch: 0%| | 0/1 [00:00<?, ?it/s]
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [0,0,0] Assertion `t >= 0 && t < n_classes` failed.
... (the same assertion failure repeats for threads [1,0,0] through [7,0,0]) ...
THCudaCheck FAIL file=/pytorch/aten/src/THCUNN/generic/ClassNLLCriterion.cu line=110 error=710 : device-side assert triggered
Current iteration: 0%| | 0/37501 [00:00<?, ?it/s]
Traceback (most recent call last):
File "", line 1, in
File "/home/jbabu/.local/lib/python3.7/site-packages/simpletransformers/model.py", line 142, in train_model
global_step, tr_loss = self.train(train_dataset, output_dir, show_running_loss=show_running_loss)
File "/home/jbabu/.local/lib/python3.7/site-packages/simpletransformers/model.py", line 367, in train
outputs = model(**inputs)
File "/home/jbabu/.local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in call
result = self.forward(*input, **kwargs)
File "/home/jbabu/.local/lib/python3.7/site-packages/transformers/modeling_bert.py", line 913, in forward
loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
File "/home/jbabu/.local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in call
result = self.forward(*input, **kwargs)
File "/home/jbabu/.local/lib/python3.7/site-packages/torch/nn/modules/loss.py", line 916, in forward
ignore_index=self.ignore_index, reduction=self.reduction)
File "/usr/local/lib/python3.7/dist-packages/apex/amp/wrap.py", line 28, in wrapper
return orig_fn(*new_args, **kwargs)
File "/home/jbabu/.local/lib/python3.7/site-packages/torch/nn/functional.py", line 2009, in cross_entropy
return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
File "/usr/local/lib/python3.7/dist-packages/apex/amp/wrap.py", line 28, in wrapper
return orig_fn(*new_args, **kwargs)
File "/home/jbabu/.local/lib/python3.7/site-packages/torch/nn/functional.py", line 1838, in nll_loss
ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
RuntimeError: cuda runtime error (710) : device-side assert triggered at /pytorch/aten/src/THCUNN/generic/ClassNLLCriterion.cu:110
```
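One hedged debugging aside (not from the thread): device-side asserts like this surface at whatever CUDA call happens next, so the traceback can point away from the op that actually failed. Making CUDA launches synchronous gives a cleaner picture:

```python
import os

# Set before any CUDA work: errors then surface at the kernel that actually failed.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"
```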