I've git pulled your project and notebooks and had no problems with the 01_data_token_classifications.nbs. When I tried the 099a_multilabel_classification example, the output seemed reasonable at first. Then I tried substituting other sentences into learn.predict_blurr and got exactly the same results, no matter what I provided. My setup is similar to your test setup.
Using pytorch 1.7.1
Using fastai 2.3.1
Using transformers 4.3.2
Using GPU #0: NVIDIA GeForce GTX 1080
but because of CUDA memory constraints, I set my bs=4.
Note that the result vector is always the same. I tried comments of different lengths too, with the same results.
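For reference, here's roughly what I'm doing, as a minimal sketch. It assumes the trained `learn` Learner from the 099a_multilabel_classification notebook; the prediction call uses the name mentioned above, but the exact method name should match whatever the notebook defines.

```python
# Minimal repro sketch (assumes `learn` is the trained Learner from the
# 099a_multilabel_classification notebook; the prediction method name below
# is the one referenced above and may differ depending on the blurr version).
comments = [
    "This is a perfectly polite comment.",
    "You are an awful person and I hate everything you write!",
    "Short one.",
]

preds = [learn.predict_blurr(c) for c in comments]

# If the model has learned anything, the probability vectors should differ
# across inputs; identical vectors suggest an under-trained model or an
# input-processing problem.
for c, p in zip(comments, preds):
    print(c[:40], "->", p)
```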
Have you tried training on the full dataset, or at least a larger subset of it, in that notebook?
You may also want to experiment with learn.loss_func.thresh. More training may help as well. I only trained on less than 1% of the data and really didn't do much in the way of tuning my LRs or threshold.
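As a rough sketch of what I mean by playing with the threshold (this assumes the notebook's loss is fastai's BCEWithLogitsLossFlat, which exposes a `thresh` attribute used when decoding predictions; the exact calls depend on your setup):

```python
# Hypothetical sketch: sweep the decision threshold that turns sigmoid
# probabilities into per-label decisions (assumes learn.loss_func is
# fastai's BCEWithLogitsLossFlat).
probs, targs = learn.get_preds()  # validation-set probabilities

for thresh in [0.2, 0.3, 0.5, 0.7]:
    learn.loss_func.thresh = thresh        # affects how predictions are decoded
    decoded = (probs > thresh).float()     # per-label 0/1 decisions
    mean_labels = decoded.sum(dim=1).mean()
    print(f"thresh={thresh}: mean labels per example = {mean_labels:.2f}")
```

A very low or very high threshold can make every input decode to the same label vector even when the underlying probabilities differ slightly, which is another reason to check the raw probabilities as well.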