Add mkl implementation for exponential on CPU #69967
Conversation
CI Flow Status ⚛️

You can add a comment to the PR and tag @pytorchbot with the following commands:

```
# ciflow rerun, "ciflow/default" will always be added automatically
@pytorchbot ciflow rerun

# ciflow rerun with additional labels "-l <ciflow/label_name>", which is equivalent to adding these labels manually and triggering the rerun
@pytorchbot ciflow rerun -l ciflow/scheduled -l ciflow/slow
```

For more information, please take a look at the CI Flow Wiki.
Hey @mruberry - not sure who the best person to review this internally is - please feel free to reassign as appropriate :)
✅ No Failures (0 Pending) as of commit 54c8ef6 (more details on the Dr. CI page). 💚 Looks good so far! There are no failures yet. 💚 This comment was automatically generated by Dr. CI.
Perhaps the test author @ebsmothers can answer that?
…nential implementation of PyTorch (#364)

Summary: pytorch/pytorch#69967 adds an MKL implementation for exponential in PyTorch. Although the new implementation still fits the exponential distribution, it is not the same as the old one. Change the expected values to accommodate the new exponential implementation of PyTorch.

Pull Request resolved: #364
Reviewed By: pikapecan
Differential Revision: D41430249
Pulled By: ebsmothers
fbshipit-source-id: c3333fd7f4449881a239c6b0c26d22fd63009b68
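To illustrate why the expected values had to change: even with a fixed seed, tests that pin exact random draws depend on the sampling algorithm itself, not just the seed. A hypothetical sketch (not taken from the PR or the multimodal test suite):

```python
import torch

torch.manual_seed(0)
a = torch.empty(4).exponential_()
torch.manual_seed(0)
b = torch.empty(4).exponential_()

# With a fixed seed and a fixed PyTorch build, draws are reproducible:
assert torch.equal(a, b)

# But a different sampling algorithm (such as the new MKL path) may map
# the same underlying random bits to different outputs, so expected
# values hard-coded in tests can change between PyTorch versions even
# though both versions produce valid exponential samples.
```

This is why the multimodal tests had to update their expected values rather than relying on the seed alone.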
@malfet @izaitsevfb As suggested by @ebsmothers, we changed the expected values to match the exact values from the new exponential implementation: https://github.com/facebookresearch/multimodal/pull/364. Could you please check if this PR will cause internal breakage?
@izaitsevfb has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
I've manually imported this PR to run the internal tests. For Meta employees, see D41587318.
@izaitsevfb you need to re-import, as @CaoE just pushed several commits to the PR after the import.
I think the GH timeline is misleading; the commits were made 2 hours ago, before the import. I'll double-check via the code. UPD: I believe we have the latest version imported.
@CaoE, there are some failures related to the values expected by the internal models, which I can't share specifics about. Please don't merge this yet. I'm trying to find a PoC who can provide more context.
Thank you @izaitsevfb. May I know if these tests will be fixed for the new RNG in the near future?
I think there are plans to rewrite the tests to not rely on RNG specifics, but I don't have the details. Please don't worry about that, as I've confirmed that the internal test failures are expected and can be forward-fixed.
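Rewriting tests to not rely on RNG specifics could, for example, mean asserting on distribution statistics rather than exact draws. This is a hedged sketch of that idea, an assumption about the approach, not the internal test code:

```python
import torch

torch.manual_seed(0)
samples = torch.empty(1_000_000).exponential_(lambd=2.0)

# For Exp(lambda=2): mean = 1/lambda = 0.5, variance = 1/lambda^2 = 0.25.
# These checks hold for any correct exponential sampler, regardless of
# which algorithm (MKL or the original path) produced the samples.
assert abs(samples.mean().item() - 0.5) < 0.01
assert abs(samples.var().item() - 0.25) < 0.02
assert (samples >= 0).all()
```

A test written this way survives RNG implementation changes because it checks the distribution, not a particular stream of draws.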
@izaitsevfb That's great! Can we merge this PR first and then fix the tests?
Yep, that's what I meant, sorry for not being clear.
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Summary: Due to a change in PyTorch's random distribution calculation (see pytorch/pytorch#69967), `torch.multinomial` produces different values in unit tests. `GenerationUtil` uses multinomial for autoregressive decoding and token prediction. The `test_sample` unit test now checks the output shape instead of exact tokens to mitigate sensitivity to random distribution implementations.

Pull Request resolved: #403

Test Plan:
```
python -m pytest -v tests/utils/test_generate.py
tests/utils/test_generate.py::TestLogitsMask::test_normal PASSED [  6%]
tests/utils/test_generate.py::TestLogitsMask::test_zero_dims PASSED [ 13%]
tests/utils/test_generate.py::TestLogitsMask::test_in_seq_only PASSED [ 20%]
tests/utils/test_generate.py::TestLogitsMask::test_out_seq_only PASSED [ 26%]
tests/utils/test_generate.py::TestGenerationUtil::test_model_eval_warning PASSED [ 33%]
tests/utils/test_generate.py::TestGenerationUtil::test_sample PASSED [ 40%]
tests/utils/test_generate.py::TestGenerationUtil::test_filter_logits PASSED [ 46%]
tests/utils/test_generate.py::TestLogitsFilterTopK::test_min_tokens_to_keep PASSED [ 53%]
tests/utils/test_generate.py::TestLogitsFilterTopK::test_top_k_invalid PASSED [ 60%]
tests/utils/test_generate.py::TestLogitsFilterTopK::test_default PASSED [ 66%]
tests/utils/test_generate.py::TestLogitsFilterTopK::test_top_k PASSED [ 73%]
tests/utils/test_generate.py::TestLogitsFilterTopP::test_min_tokens_to_keep PASSED [ 80%]
tests/utils/test_generate.py::TestLogitsFilterTopP::test_top_p_invalid PASSED [ 86%]
tests/utils/test_generate.py::TestLogitsFilterTopP::test_default PASSED [ 93%]
tests/utils/test_generate.py::TestLogitsFilterTopP::test_top_p PASSED [100%]
```

Reviewed By: pbontrager
Differential Revision: D43110663
Pulled By: RdoubleA
fbshipit-source-id: d6c8665cb1ea102352e09c4557ff6982ffa40f84
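A shape-based test in the spirit of the updated `test_sample` might look like the sketch below; `sample_next_tokens` is a hypothetical helper for illustration, not the actual `GenerationUtil` API:

```python
import torch

def sample_next_tokens(logits: torch.Tensor, num_samples: int = 1) -> torch.Tensor:
    """Sample token ids from a batch of logits (illustrative helper,
    not the real GenerationUtil implementation)."""
    probs = torch.softmax(logits, dim=-1)
    return torch.multinomial(probs, num_samples)

logits = torch.randn(2, 10)  # batch of 2 sequences, vocab size 10
tokens = sample_next_tokens(logits)

# Assert on shape and valid range instead of exact token ids, which
# depend on the RNG implementation and can change between PyTorch
# versions (as this PR's change to exponential sampling demonstrated).
assert tokens.shape == (2, 1)
assert ((tokens >= 0) & (tokens < 10)).all()
```

Asserting on shape and range keeps the test meaningful while making it robust to any future change in how `torch.multinomial` consumes random bits.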
Description
Add an MKL implementation for exponential on CPU to improve the performance of exponential sampling.
Testing
data type: float32
single socket (28 cores):
single core:
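The performance claim above can be exercised with a small timing sketch; the tensor size and rate parameter below are illustrative assumptions, not the PR's actual benchmark configuration:

```python
import time
import torch

# Hypothetical micro-benchmark for torch.Tensor.exponential_ on CPU;
# a large float32 tensor makes the fill cost dominate Python overhead.
x = torch.empty(10_000_000, dtype=torch.float32)

start = time.perf_counter()
x.exponential_(lambd=1.0)  # fill in place with Exp(1) samples
elapsed = time.perf_counter() - start

print(f"filled {x.numel()} float32 values in {elapsed:.4f} s")
assert (x >= 0).all()  # exponential samples are non-negative
```

Running this on builds with and without the MKL path would be one way to observe the speedup; the PR's own numbers were measured on single-socket (28-core) and single-core configurations.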
cc @VitalyFedyunin @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10