
Conversation

@isdanni
Contributor

@isdanni isdanni commented Apr 28, 2023

Summary:

Fix: #99141

Reference: https://github.com/pytorch/pytorch/blob/39b885cbbfc8c115069d49f5a6d27ea622bd05dc/aten/src/ATen/native/LossCTC.cpp#L366-L371

if (reduction == Reduction::Mean) {
  auto target_lengths_t = at::tensor(target_lengths, res.options());
  return (res / target_lengths_t).mean();
} else if (reduction == Reduction::Sum) {
  return res.sum();
}
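The C++ snippet above can be paraphrased in plain Python. This is a hedged sketch of the reduction semantics only (no actual CTC computation, no torch dependency); `per_sequence_loss` and `target_lengths` are hypothetical example values, not names from the PyTorch API.

```python
def reduce_ctc_loss(per_sequence_loss, target_lengths, reduction):
    """Apply CTCLoss-style reduction to a list of per-sequence losses.

    Sketch of the LossCTC.cpp logic quoted above, under the assumption
    that `per_sequence_loss` holds one unreduced loss per batch element.
    """
    if reduction == "mean":
        # Each sequence loss is first divided by its target length,
        # then the normalized losses are averaged over the batch.
        normalized = [l / t for l, t in zip(per_sequence_loss, target_lengths)]
        return sum(normalized) / len(normalized)
    elif reduction == "sum":
        # Raw per-sequence losses are summed with no length normalization.
        return sum(per_sequence_loss)
    # reduction == "none": return the unreduced losses, one per batch element.
    return per_sequence_loss

losses = [6.0, 10.0]
lengths = [3, 5]
print(reduce_ctc_loss(losses, lengths, "mean"))  # (6/3 + 10/5) / 2 = 2.0
print(reduce_ctc_loss(losses, lengths, "sum"))   # 6 + 10 = 16.0
```

Note that 'mean' is not simply `sum(losses) / N`: the per-target-length normalization happens before averaging, which is exactly the distinction the linked issue asked the docs to spell out for 'sum'.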

Test Plan: See GitHub Tests.

Differential Revision: D45387774

@pytorch-bot

pytorch-bot bot commented Apr 28, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/100235

Note: Links to docs will display an error until the docs builds have been completed.

✅ 1 Unrelated Failure

As of commit b731318:

UNSTABLE - The following job failed but was likely due to flakiness present on trunk and has been marked as unstable:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D45387774

Collaborator

@albanD albanD left a comment


Thanks!

Contributor

@mikaylagawarecki mikaylagawarecki left a comment


Thanks @isdanni

nit: could you also update the documentation for the output shape, per the comment on the linked issue? :)

@mikaylagawarecki mikaylagawarecki added release notes: nn release notes category topic: docs topic category labels Apr 28, 2023
@isdanni
Contributor Author

isdanni commented Apr 28, 2023

Thanks @isdanni

nit: could you also update the documentation for the output shape, per the comment on the linked issue? :)

Updated. I think the original doc's "scalar" just meant that the output is a scalar by default (i.e., when reduction is 'mean' or 'sum'), but it might be a bit unclear. I added an if statement to explain further :)
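The clarified shape rule can be summed up in a tiny sketch. This is illustrative only (the function name `ctc_output_shape` is hypothetical, not part of the PyTorch API): the loss is a 0-dim scalar for reduction='mean' or 'sum', and has one entry per batch element for reduction='none'.

```python
def ctc_output_shape(batch_size, reduction):
    """Return the documented output shape of CTCLoss as a tuple.

    Hypothetical helper to illustrate the doc clarification; `batch_size`
    is N, the number of sequences in the batch.
    """
    if reduction in ("mean", "sum"):
        return ()            # 0-dim scalar
    return (batch_size,)     # reduction == "none": shape (N,)

print(ctc_output_shape(4, "mean"))  # ()
print(ctc_output_shape(4, "none"))  # (4,)
```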

@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D45387774

Contributor

@mikaylagawarecki mikaylagawarecki left a comment


Ah I missed that first scalar! Thank you so much for clarifying the docs! :)

@github-actions
Contributor

Looks like this PR hasn't been updated in a while so we're going to go ahead and mark this as Stale.
Feel free to remove the Stale label if you feel this was a mistake.
If you are unable to remove the Stale label please contact a maintainer in order to do so.
If you want the bot to never mark this PR stale again, add the no-stale label.
Stale pull requests will automatically be closed after 30 days of inactivity.

@github-actions github-actions bot added the Stale label Jun 27, 2023
@mikaylagawarecki
Contributor

@pytorchbot rebase

@pytorchmergebot
Collaborator

@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here

@pytorchmergebot
Collaborator

Successfully rebased export-D45387774 onto refs/remotes/origin/viable/strict, please pull locally before adding more changes (for example, via git checkout export-D45387774 && git pull --rebase)

@mikaylagawarecki
Contributor

@pytorchbot merge -f "build failures are unrelated"

@pytorchmergebot
Collaborator

Merge started

Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here

@pytorchmergebot
Collaborator

Merge failed

Reason: This PR has internal changes and must be landed via Phabricator

Details for Dev Infra team: raised by workflow job

Summary:
Pull Request resolved: pytorch#100235

Reference: https://github.com/pytorch/pytorch/blob/39b885cbbfc8c115069d49f5a6d27ea622bd05dc/aten/src/ATen/native/LossCTC.cpp#L366-L371

Test Plan: See GitHub Tests.

Differential Revision: D45387774

fbshipit-source-id: 8166f5ea0383312bce0df2d9ea53aed2edf551b4
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D45387774

@isdanni isdanni force-pushed the export-D45387774 branch from 5f849ad to b731318 on June 28, 2023 15:49
@isdanni
Contributor Author

isdanni commented Jun 28, 2023

Thanks for merging this, @mikaylagawarecki. My other PR had a similar problem; rebasing and exporting from Phabricator should do the trick.

@mikaylagawarecki
Contributor

@pytorchbot merge -f "build failure is unrelated"

@pytorchmergebot
Collaborator

Merge started

Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here

Successfully merging this pull request may close these issues.

The CTCLoss method doc does not specify what the sum option in reduction does