NLP refactoring - Stage 2 #368
Conversation
Signed-off-by: Evelina Bakhturina <ebakhturina@nvidia.com>
This pull request introduces 2 alerts and fixes 1 when merging 8be0691 into f072029 - view on LGTM.com
Signed-off-by: Evelina Bakhturina <ebakhturina@nvidia.com>
This pull request introduces 7 alerts and fixes 1 when merging ce70f26 into f072029 - view on LGTM.com
Signed-off-by: Evelina Bakhturina <ebakhturina@nvidia.com>
Signed-off-by: Evelina Bakhturina <ebakhturina@nvidia.com>
Signed-off-by: Evelina Bakhturina <ebakhturina@nvidia.com>
This pull request introduces 9 alerts and fixes 1 when merging 0820752 into f072029 - view on LGTM.com
Signed-off-by: Evelina Bakhturina <ebakhturina@nvidia.com>
This pull request introduces 9 alerts and fixes 1 when merging 5b74599 into f072029 - view on LGTM.com
Signed-off-by: Evelina Bakhturina <ebakhturina@nvidia.com>
…ctoring_stage2
This pull request introduces 9 alerts and fixes 1 when merging 60f6e3c into 142bed9 - view on LGTM.com
Signed-off-by: Evelina Bakhturina <ebakhturina@nvidia.com>
This pull request introduces 7 alerts and fixes 1 when merging 4428f37 into 142bed9 - view on LGTM.com
Signed-off-by: Evelina Bakhturina <ebakhturina@nvidia.com>
This pull request introduces 2 alerts and fixes 1 when merging 8b1d72d into 142bed9 - view on LGTM.com
Signed-off-by: Evelina Bakhturina <ebakhturina@nvidia.com>
This pull request introduces 2 alerts and fixes 1 when merging 79cb8f0 into 142bed9 - view on LGTM.com
Signed-off-by: Evelina Bakhturina <ebakhturina@nvidia.com>
This pull request introduces 2 alerts and fixes 1 when merging 04311df into 142bed9 - view on LGTM.com
Signed-off-by: Evelina Bakhturina <ebakhturina@nvidia.com>
This pull request introduces 2 alerts and fixes 1 when merging 563b69b into 142bed9 - view on LGTM.com
Signed-off-by: Evelina Bakhturina <ebakhturina@nvidia.com>
Signed-off-by: Evelina Bakhturina <ebakhturina@nvidia.com>
Merged PaddedSmoothedCrossEntropyLossNM with MaskedLanguageModelingLossNM into unified SmoothedCrossEntropyLossNM. Moved SmoothedCrossEntropyLoss into the file for SmoothedCrossEntropyLossNM. Signed-off-by: VahidooX <vnoroozi@nvidia.com>
Signed-off-by: VahidooX <vnoroozi@nvidia.com>
…_refactoring_stage2
This pull request introduces 2 alerts and fixes 1 when merging 9caa0c6 into 49bf035 - view on LGTM.com
… into nlp_refactoring_stage2
This pull request introduces 1 alert and fixes 1 when merging 88a95b5 into 49bf035 - view on LGTM.com
Looks good to me. Could you please fix a few minor issues with the docstrings for the loss modules?
Signed-off-by: VahidooX <vnoroozi@nvidia.com>
Signed-off-by: Evelina Bakhturina <ebakhturina@nvidia.com>
This pull request introduces 1 alert and fixes 1 when merging ac9f023 into 553b0ae - view on LGTM.com
Signed-off-by: VahidooX <vnoroozi@nvidia.com>
… into nlp_refactoring_stage2
This pull request introduces 1 alert and fixes 1 when merging 47b8f5c into 553b0ae - view on LGTM.com
Signed-off-by: Evelina Bakhturina <ebakhturina@nvidia.com>
This pull request introduces 1 alert and fixes 1 when merging bf2d7b2 into 553b0ae - view on LGTM.com
Signed-off-by: VahidooX <vnoroozi@nvidia.com>
Signed-off-by: VahidooX <vnoroozi@nvidia.com>
This pull request fixes 1 alert when merging 5745b56 into 553b0ae - view on LGTM.com
Signed-off-by: VahidooX <vnoroozi@nvidia.com>
This pull request introduces 1 alert and fixes 1 when merging 6afa104 into 553b0ae - view on LGTM.com
"""Returns definitions of module input ports. | ||
""" | ||
return { | ||
"logits": NeuralType(('B', 'T', 'D'), LogitsType()), |
Shouldn't we rename this to log_probabilities?
class TRADEMaskedCrossEntropy(LossNM):
class MaskedXEntropyLoss(LossNM):
Can we be consistent in using either XEntropy or CrossEntropy? I vote for CrossEntropy.
Changed it to MaskedLogLoss.
why not MaskedCrossEntropyLoss?
def _compute_softmax(scores):
    """Compute softmax probability over raw logits."""
    if not scores:
        return []

    max_score = None
    for score in scores:
        if max_score is None or score > max_score:
            max_score = score

    exp_scores = []
    total_sum = 0.0
    for score in scores:
        x = math.exp(score - max_score)
        exp_scores.append(x)
        total_sum += x

    probs = []
    for score in exp_scores:
        probs.append(score / total_sum)
    return probs
When would we ever want to do this without going through numpy or torch?
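For comparison, the same numerically stable softmax is a few lines of numpy. This is a sketch, assuming `scores` is a flat list of floats as in the pure-Python function above; the function name here is illustrative, not part of the codebase:

```python
import numpy as np

def compute_softmax_np(scores):
    """Numerically stable softmax over raw logits, matching the manual loop."""
    if len(scores) == 0:
        return []
    s = np.asarray(scores, dtype=np.float64)
    # Subtract the max before exponentiating to avoid overflow,
    # exactly as the loop does with max_score.
    e = np.exp(s - s.max())
    return (e / e.sum()).tolist()
```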
Signed-off-by: Evelina Bakhturina <ebakhturina@nvidia.com>
Stage 2 of NLP refactoring:
++ Added weighting option to LossAggregatorNM
++ Moved LossAggregatorNM to the losses.py in the backend common
++ Split JointIntentSlotLoss into two separate common losses and removed it
++ Merged MaskedLanguageModelingLossNM, PaddedSmoothedCrossEntropyLossNM and SmoothedCrossEntropyLoss into a unified loss SmoothedCrossEntropyLoss
++ Changed QuestionAnsweringLoss to a more general name SpanningLoss
++ Changed TRADEMaskedCrossEntropy to a more general name MaskedXEntropyLoss
++ Removed TokenClassificationLoss, CrossEntropyLoss3D and JointIntentSlotLoss
++ Added weighting and masking support to CrossEntropyLossNM
++ Added dynamic port sizes to CrossEntropyLossNM
++ Changed CrossEntropyLoss to CrossEntropyLossNM to prevent confusion with PyTorch's CrossEntropyLoss
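The masking and weighting semantics added to CrossEntropyLossNM can be sketched in a framework-agnostic way. The function below is illustrative only: the name, signature, and numpy formulation are assumptions for this sketch, not the actual NeMo module API:

```python
import numpy as np

def masked_weighted_cross_entropy(log_probs, labels, mask=None, class_weights=None):
    """Mean negative log-likelihood over unmasked positions.

    log_probs:     (N, C) array of log-probabilities
    labels:        (N,) integer class labels
    mask:          (N,) 0/1 array; positions with 0 (e.g. padding) are excluded
    class_weights: (C,) optional per-class weights
    """
    n = labels.shape[0]
    nll = -log_probs[np.arange(n), labels]   # per-example negative log-likelihood
    if class_weights is not None:
        nll = nll * class_weights[labels]    # weight each example by its class
    if mask is not None:
        nll = nll[mask.astype(bool)]         # drop masked (padded) positions
    return nll.mean()
```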