Add focal loss to deal with class imbalance
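Focal loss counters class imbalance by scaling the per-example cross-entropy with `(1 - p_t)**gamma` (and optionally an `alpha` class weight), which shrinks the contribution of easy, confident predictions. A minimal pure-Python sketch of the idea, not the actual AllenNLP tensor code (all names here are illustrative):

```python
import math

def focal_loss_term(p_t, gamma=2.0, alpha=1.0):
    """One-example focal loss: -alpha * (1 - p_t)**gamma * log(p_t).

    p_t is the model's probability for the true class; with gamma=0 and
    alpha=1 this reduces to plain cross-entropy.
    """
    cross_entropy = -math.log(p_t)
    modulator = (1.0 - p_t) ** gamma  # shrinks toward 0 for easy examples
    return alpha * modulator * cross_entropy

# Easy, confident predictions are heavily down-weighted...
easy = focal_loss_term(0.95)
# ...while hard examples keep most of their cross-entropy,
# which is what rebalances the loss toward the rare class.
hard = focal_loss_term(0.05)
```

Setting `gamma=0` recovers ordinary cross-entropy, which is a handy sanity check when testing.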
Sorry, it seems there are some problems. But it works fine in my local copy.
You can see it for yourself here: http://build.allennlp.org/viewLog.html?buildId=17043&buildTypeId=AllenNLP_AllenNLPPullRequests&tab=buildLog. Just log in as a guest.
Thanks. I did not see the guest option. I will get back when I figure this out.
It was in the wrong place. Sorry.
Finally here. It was not an easy job to make these checkers happy.
Largely looks great! There are just a few minor things that could use some fixing, then this is good to merge. Thanks for the contribution!
For a clearer cross_entropy formulation.
Combine everything into `weights` and avoid references to local variables later.
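The suggestion here is to fold the mask, the `alpha` class weight, and the focal `(1 - p_t)**gamma` factor into a single per-token `weights` value up front, so downstream code never touches the individual factors. A list-based sketch of that refactor, not the actual tensor code in `util.py` (all names are illustrative):

```python
def combine_weights(mask, alpha_factors, focal_factors):
    # Fold every per-token factor into one `weights` list so that
    # later code (the weighted sum, the average) only ever references
    # `weights`, never the individual local variables.
    return [m * a * f for m, a, f in zip(mask, alpha_factors, focal_factors)]

# Masked-out tokens (mask == 0) end up with weight 0 regardless of
# their alpha/focal factors.
weights = combine_weights([1, 1, 0], [0.25, 0.75, 0.75], [0.9, 0.1, 0.5])
```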
Add `@flaky` also to token-average tests.
It seems that moving where the weights are combined introduces a new problem. I will look at it later.
Avoid involving `gamma` or `alpha` in average.
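The point of this change: the `gamma`/`alpha`-scaled weights belong only in the numerator of the average, while the denominator should count real (unmasked) tokens; otherwise the focal factors silently rescale the normalizer too. A pure-Python sketch of the intended token average, with hypothetical names rather than the actual `util.py` code:

```python
def token_average(losses, mask, focal_weights, eps=1e-13):
    # Numerator: per-token losses scaled by the mask and the
    # focal/alpha weights.
    numerator = sum(l * m * w for l, m, w in zip(losses, mask, focal_weights))
    # Denominator: the number of real tokens only. `gamma` and `alpha`
    # must not leak into the average's normalizer.
    denominator = sum(mask) + eps
    return numerator / denominator
```

With all focal weights equal to 1 this reduces to a plain masked mean, which makes the behavior easy to test.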
More tolerance for the token average, so that the test only complains when the change is greater than 1/1000.
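The relaxed test compares with a relative tolerance, so it only fails when the result moves by more than one part in a thousand. Roughly the following hand-rolled analogue of `numpy.testing.assert_allclose` (names illustrative):

```python
def within_relative_tolerance(actual, expected, rtol=1e-3):
    # True unless `actual` deviates from `expected` by more than
    # rtol * |expected| -- i.e. the check only complains when the
    # change exceeds 1/1000.
    return abs(actual - expected) <= rtol * abs(expected)
```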
Looks great, thanks @guoquan!
* Add options for focal loss: add focal loss to deal with class imbalance
* Fix some typing problems
* Update util.py
* Update util.py
* Update util.py
* Fix too-long lines
* Fix keyword argument alert
* Update util.py
* Update util.py
* Fix a problem where focal loss was not activated
* Add test for focal loss gamma
* Fix some decimal precision problems
* Add focal loss alpha test
* Update util_test.py
* Update util.py
* Update util_test.py
* Address some pylint and mypy problems
* Update util.py
* Update util.py
* Update util.py
* Update util.py
* Update util_test.py
* Update util_test.py: it was in the wrong place. Sorry.
* Restore not-callable after torch.tensor()
* Update util_test.py: for a clearer cross_entropy formulation
* Update util.py: combine everything into `weights` and avoid references to local variables later
* Update util_test.py: add `@flaky` also to token-average tests
* Update util.py: avoid involving `gamma` or `alpha` in the average
* Update util_test.py
* Update util_test.py: more tolerance for the token average, so that the test only complains when the change is greater than 1/1000