Ignore the updates when weights are 0s and return the default value #283
Closed
Conversation
facebook-github-bot added the CLA Signed (this label is managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed) and fb-exported labels on May 3, 2022.
This pull request was exported from Phabricator. Differential Revision: D36114064
yachyv7 added a commit to yachyv7/torchrec that referenced this pull request on May 4, 2022:

…ytorch#283) Summary: Pull Request resolved: pytorch#283 For a multi-task multi-label (MTML) model, we sometimes intentionally set weights = 0 so that the model effectively ignores the data. For metrics calculation, we should: ignore the update if the weights for all tasks are 0; ignore the metric result and output 0 (the metric's default value) if the weights for a task are 0. Previously, if weights = 0, some metrics would produce NaN values and trigger metric-health alerts. This change fixes that. Differential Revision: D36114064 fbshipit-source-id: bb684849f737fa9a68eeae7c76509c5656818b34
yachyv7 added a commit to yachyv7/torchrec that referenced this pull request on May 4, 2022 (same summary as above; fbshipit-source-id: 0e243bd96cdc33c9ea7f5399631950fa869a1e59).
yachyv7 added a commit to yachyv7/torchrec that referenced this pull request on May 5, 2022 (same summary as above; fbshipit-source-id: e709885aebd743cd008a48debc43f31c041c5cf5).
…ytorch#283) Summary: Pull Request resolved: pytorch#283 For a multi-task multi-label (MTML) model, we sometimes intentionally set weights = 0 so that the model effectively ignores the data. For metrics calculation, we should: ignore the update if the weights for all tasks are 0; ignore the metric result and output 0 (the metric's default value) if the weights for a task are 0. Previously, if weights = 0, some metrics would produce NaN values and trigger metric-health alerts. This change fixes that. Reviewed By: fegin Differential Revision: D36114064 fbshipit-source-id: 144b06d9bcf4954107738463cda1f41f23a88c5f
samiwilf added a commit to samiwilf/torchrec that referenced this pull request on Oct 25, 2022:

…ytorch#283) Summary: X-link: facebookresearch/dlrm#283 Removes the constraint that all ranks must iterate through batches of exactly the same size for exactly the same number of iterations. Each rank's input batch can now be a different size containing a different number of samples, and each rank can forward-pass or train fewer or more batches than other ranks. Differential Revision: D40676549 fbshipit-source-id: 47174289e88d7d13339a9b16325b4275bc0aa628
Summary:
For a multi-task multi-label (MTML) model, we sometimes intentionally set weights = 0 so that the model effectively ignores the data. For metrics calculation, we should:
- ignore the update if the weights for all tasks are 0
- ignore the metric result and output 0 (the metric's default value) if the weights for a task are 0

Previously, if weights = 0, some metrics would produce NaN values and trigger metric-health alerts. This change fixes that.

Differential Revision: D36114064
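The behavior described above can be sketched with a minimal weighted-mean metric. This is a hypothetical illustration, not the actual torchrec implementation: the class name and structure are invented for clarity. It shows the two guards the summary describes: an update whose weights are all 0 is skipped, and `compute()` returns the default value 0 instead of the NaN that would result from dividing by a zero weight sum.

```python
class WeightedMeanMetric:
    """Hypothetical sketch of zero-weight handling in a weighted-mean metric.

    Updates where all weights are 0 are ignored, and compute() returns the
    metric's default value (0.0) instead of NaN when no weight accumulated.
    """

    DEFAULT_VALUE = 0.0

    def __init__(self):
        self.weighted_sum = 0.0
        self.weight_sum = 0.0

    def update(self, values, weights):
        # Guard 1: ignore the update entirely if all weights are 0.
        if all(w == 0 for w in weights):
            return
        self.weighted_sum += sum(v * w for v, w in zip(values, weights))
        self.weight_sum += sum(weights)

    def compute(self):
        # Guard 2: return the default value instead of NaN (0 / 0)
        # when no weight has been accumulated.
        if self.weight_sum == 0:
            return self.DEFAULT_VALUE
        return self.weighted_sum / self.weight_sum


m = WeightedMeanMetric()
m.update([0.5, 0.7], [0.0, 0.0])   # ignored: all weights are 0
print(m.compute())                  # 0.0 (default value), not NaN
m.update([0.5, 0.7], [1.0, 3.0])
print(m.compute())                  # (0.5*1.0 + 0.7*3.0) / 4.0 = 0.65
```

Without the two guards, the first `compute()` call would evaluate `0.0 / 0.0` and surface as NaN in monitoring, which is exactly the failure mode that triggered the metric-health alerts.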