
exponential_ few fixes (1) lambda > 0 (2) mkl kernel to continuous (3) better error log on dtype #92891

Conversation

@min-jean-cho min-jean-cho commented Jan 24, 2023

Exponential distribution is continuous. Fixes CPU MKL exponential implementation to exclude integer dtypes.

import torch

dtypes = [torch.uint8, torch.int8, torch.int16, torch.int32, torch.int64]

for dtype in dtypes:
    # exponential_ samples a continuous distribution, so integer dtypes
    # should raise; before this fix, the CPU MKL path silently succeeded.
    x = torch.empty(10000, dtype=dtype).exponential_()
    print("dtype:", x.dtype, "sum:", x.sum())
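For intuition, independent of PyTorch, here is a minimal pure-Python sketch of exponential sampling via the inverse CDF (the function name and setup are illustrative, not PyTorch's implementation). It shows why an integer dtype cannot hold the result: every sample is a continuous real in (0, inf), and truncating samples to integers badly biases the distribution toward 0.

```python
import math
import random

def sample_exponential(lambd, n, seed=0):
    # Inverse-transform sampling: X = -ln(1 - U) / lambd with U ~ Uniform(0, 1).
    # Every sample is a continuous real, so an integer output dtype cannot
    # represent the distribution faithfully.
    rng = random.Random(seed)
    return [-math.log(1.0 - rng.random()) / lambd for _ in range(n)]

samples = sample_exponential(2.0, 10000)
mean = sum(samples) / len(samples)       # should be close to 1 / lambd = 0.5
truncated = [int(s) for s in samples]    # what an integer dtype would store
print("continuous mean:", mean)
print("truncated mean:", sum(truncated) / len(truncated))  # biased toward 0
```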

Additional Context

Related to #92709. This issue also propagates to the OpInfo of exponential:

AssertionError: The supported dtypes for exponential on device type cpu are incorrect!
The following dtypes worked in forward but are not listed by the OpInfo: {torch.int64, torch.uint8, torch.int8, torch.int16, torch.int32}.

cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10

@pytorch-bot

pytorch-bot bot commented Jan 24, 2023

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/92891
⏳ No Failures, 5 Pending

As of commit a30aa39:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@github-actions github-actions bot added the module: cpu CPU specific problem (e.g., perf, algorithm) label Jan 24, 2023

@lezcano lezcano left a comment

Could you also add a TORCH_CHECK throwing a more informative error?

@min-jean-cho

Thanks @lezcano, should we add such a check for the other exponential distribution implementations as well? Currently, the error is "exponential_{device} not implemented for {dtype}" for the default cpu and cuda implementations as well as the cpu mkl one.

lezcano commented Jan 24, 2023

Yeah, that'd be great! If this were implemented properly, all the checks would be shared by all the implementations, and only then would we dispatch to the actual backend, but I guess that's not the case.
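The shared-checks structure described above could be sketched in Python along these lines; the helper and backend names are hypothetical illustrations of the idea, not PyTorch's actual dispatcher.

```python
def _check_exponential_args(is_floating_point, lambd):
    # Hypothetical shared validation: every backend (default CPU, CPU MKL,
    # CUDA) would run this once before dispatching to its kernel, instead
    # of each kernel duplicating (or forgetting) the checks.
    if not is_floating_point:
        raise TypeError("exponential_ is continuous; integer dtypes are not supported")
    if not lambd > 0:
        raise ValueError(f"exponential_ expects lambda > 0.0, but found lambda={lambd}")

def exponential_mkl_sketch(is_floating_point, lambd):
    _check_exponential_args(is_floating_point, lambd)
    return "dispatched to MKL kernel"  # placeholder for the real kernel call
```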

@min-jean-cho min-jean-cho changed the title fix CPU MKL exponential distribution to be continuous exponential_ few fixes (1) lambda > 0 (2) mkl kernel to continuous (3) better error log on dtype Jan 26, 2023
@min-jean-cho

Thanks @lezcano, I've (1) added a better error message on unsupported dtypes and (2) tightened the lambda >= 0 check to lambda > 0 as discussed in #93053. Please have a look, thanks!
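On (2), a quick pure-Python illustration of why the bound must be strict (the function name is mine, not PyTorch's): the density lambda * exp(-lambda * x) is identically zero at lambda == 0, so it integrates to 0 rather than 1, and the inverse-transform sampler -ln(1 - u) / lambda divides by zero there and flips sign for lambda < 0.

```python
import math
import random

def inv_cdf(u, lambd):
    # Inverse CDF of Exponential(lambd): F^{-1}(u) = -ln(1 - u) / lambd.
    # Only meaningful for lambd > 0: lambd == 0 divides by zero, and
    # lambd < 0 yields negative values, outside the support [0, inf).
    return -math.log(1.0 - u) / lambd

rng = random.Random(0)
samples = [inv_cdf(rng.random(), 2.0) for _ in range(1000)]
assert all(s >= 0 for s in samples)

negative = [inv_cdf(rng.random(), -2.0) for _ in range(10)]
assert all(s <= 0 for s in negative)  # a negative rate gives invalid samples
```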

@min-jean-cho

@pytorchbot rebase -b master

@pytorchmergebot

@pytorchbot successfully started a rebase job. Check the current status here

@pytorchmergebot

Successfully rebased minjean/mkl_exponential_continuous onto refs/remotes/origin/master, please pull locally before adding more changes (for example, via git checkout minjean/mkl_exponential_continuous && git pull --rebase)

@pytorchmergebot pytorchmergebot force-pushed the minjean/mkl_exponential_continuous branch from 81ad5e9 to 49e9c95 Compare January 27, 2023 18:08
@min-jean-cho

@pytorchbot merge

@pytorch-bot pytorch-bot bot added the ciflow/trunk Trigger trunk jobs on your pull request label Jan 27, 2023
@pytorchmergebot

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team
