[FBcode->GH] Fix some tests in fbcode #3686
Conversation
…_cuda (pytorch#3675)

Summary:
Pull Request resolved: pytorch#3675

This test is consistently failing or being skipped/omitted: https://www.internalfb.com/intern/test/562949978742689?ref_report_id=0

Some models are known to be flaky with autocast, so we just ignore the check, as with other models.

Reviewed By: fmassa

Differential Revision: D27791576

fbshipit-source-id: b7c85e4d67143bcc3cf4b5da0150a6dd6fd12298
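As a rough illustration (not the actual test code from this PR), skipping the autocast value check for models known to be flaky might look like the sketch below; the set name, model entry, and helper are assumptions made for the example.

```python
import torch

# Hypothetical set of models whose autocast outputs are too flaky to compare
# exactly; the entry here is only an example.
AUTOCAST_FLAKY_MODELS = {"some_flaky_model"}

def check_autocast_output(model_name, eager_out, autocast_out, atol=1e-3):
    if model_name in AUTOCAST_FLAKY_MODELS:
        # Known flaky under autocast: only verify the output shape and skip
        # the value comparison entirely.
        assert eager_out.shape == autocast_out.shape
        return
    # For all other models, keep the usual closeness check.
    assert torch.allclose(eager_out.float(), autocast_out.float(), atol=atol)
```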
Summary:
Pull Request resolved: pytorch#3676

The test is constantly failing: https://www.internalfb.com/intern/test/562949982577806?ref_report_id=0

The fix just adjusts `atol` from 1e-8 to 1e-7. The equality test was likely failing on exact zeros.

Reviewed By: fmassa

Differential Revision: D27790959

fbshipit-source-id: 58d06250df5905e39e197ee946ee2d875a5bab76
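The reasoning about exact zeros can be made concrete with a small, self-contained example (not taken from the test itself): when the expected value is exactly zero, the relative tolerance contributes nothing, so `atol` alone decides the outcome.

```python
import torch

expected = torch.zeros(3)
actual = expected + 5e-8  # tiny numerical noise around exact zeros

# allclose checks |actual - expected| <= atol + rtol * |expected|; with
# expected == 0 the rtol term vanishes, so atol=1e-8 rejects noise of 5e-8...
assert not torch.allclose(actual, expected, atol=1e-8)

# ...while atol=1e-7 absorbs it, which is the essence of the fix.
assert torch.allclose(actual, expected, atol=1e-7)
```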
Summary:
Pull Request resolved: pytorch#3677

This test is broken: https://www.internalfb.com/intern/test/281475006043433?ref_report_id=0

This diff fixes the test on CUDA devices by adjusting the tolerance, as was previously done for this same test.

Reviewed By: fmassa

Differential Revision: D27792082

fbshipit-source-id: b336fb68fb72a5a80136efd5c2d3c9d0e1d4f604
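A minimal sketch of what a device-dependent tolerance adjustment could look like, assuming the test compares tensors with `torch.allclose`; the helper name and tolerance values are illustrative, not the actual change.

```python
import torch

def assert_close(actual, expected, device):
    # Looser absolute tolerance on CUDA, where small numerical differences
    # relative to CPU are expected; the values here are illustrative only.
    atol = 1e-5 if device == "cuda" else 1e-8
    assert torch.allclose(actual, expected, atol=atol)
```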
LGTM, thanks!
Please make sure that the commit message contains the `[FBcode->GH]` prefix before merging the PR.
@datumbox do you foresee any potential issue with squashing the 4 commits into one as done here, instead of opening one PR for each of them? Otherwise I'll merge :)
Summary:
Pull Request resolved: pytorch#3687

Test is broken: https://www.internalfb.com/intern/test/844424959297016?ref_report_id=0

The issue is that very few pixels may differ between the scripted version and the regular version for float16. As far as I can tell, those discrepancies can be quite large, but they happen on very few pixels (less than 0.1%). Also, they seem to appear on pixels that have at least one coordinate in common with the startpoints or endpoints. A wild guess of mine would be that the pixel is black on one image (or whatever the background is) and a non-background pixel on the other, hence the large difference in values, but the actual source of the difference may just be a minor floating-point difference.

Since the check is already quite robust and the equivalence between scripted and regular is already tested for non-batched entries, we simply avoid the check.

Reviewed By: fmassa

Differential Revision: D27794284

fbshipit-source-id: fd04cf9d9fb5ce092a42cc424f6b74b379ed5a3d
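To make the "less than 0.1% of pixels" observation concrete, here is an illustrative sketch (an assumed helper, not the test code) that measures the fraction of pixels where the scripted and eager float16 outputs disagree beyond some tolerance.

```python
import torch

def mismatch_ratio(scripted_out, eager_out, tol=1e-2):
    # Compare in float32 to avoid further float16 rounding in the diff itself.
    diff = (scripted_out.float() - eager_out.float()).abs()
    mismatched = (diff > tol).sum().item()
    return mismatched / diff.numel()

# In the scenario described above, the ratio would be expected to stay below
# 0.1%, e.g.: assert mismatch_ratio(scripted_img, eager_img) < 1e-3
```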
@NicolasHug No major problem. Squashing used to be the way we synced the repos in the past anyway. Ideally, when possible, it would be good to bring them in as individual PRs; it's more of a nice-to-have for tracking and management reasons. It should not block merging this one, IMO. :)
Cherry-picking of #3675, #3676, #3677 and #3687.