[special] add zeta #59623
Conversation
💊 CI failures summary and remediations

As of commit a5aa475 (more details on the Dr. CI page and at hud.pytorch.org/pr/59623):

🕵️ 1 new failure recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

| Job | Step | Action |
| --- | --- | --- |
|  | Run tests | 🔁 rerun |

1 job timed out:

- pytorch_linux_xenial_cuda11_1_cudnn8_py3_gcc7_build
rtol, atol = None, None
if self.device_type == 'cpu':
    rtol, atol = 1e-6, 1e-6
self.assertEqual(expected, actual, rtol=rtol, atol=atol, exact_dtype=False)
Had to add tolerance based on CI failures (for jobs with relatively old compilers).
cc @pmeier for yet another example of binary elementwise testing we should automate
supports_autograd=False,
safe_casts_outputs=True,
sample_inputs_func=sample_inputs_binary_pwise),
# OpInfo entry to verify the gradient formula of `other`/`q` |
Smart.
One goal of our "writing ops in Python" project (which we're looking at for H2 2021) is to simplify supporting scalar x tensor and tensor x scalar operations, possibly by automating the generation of those functions.
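For context, the "gradient formula" entry mentioned in the comment above refers to a second OpInfo that swaps the argument order so that gradcheck exercises the derivative with respect to `q` (for the Hurwitz zeta, d/dq zeta(x, q) = -x * zeta(x + 1, q); zeta is not differentiable with respect to `x` here). A minimal sketch of what such an entry might look like, in the style of the snippet above; the `variant_test_name` value and the `sample_inputs_zeta` helper are assumptions, not the PR's exact code:

```python
# Hedged sketch, not the PR's exact entry: expose `q` as the first (and only
# differentiable) positional argument so gradcheck verifies
# d/dq zeta(x, q) = -x * zeta(x + 1, q).
OpInfo('special.zeta',
       op=lambda q, x, **kwargs: torch.special.zeta(x, q, **kwargs),
       variant_test_name='grad',                # assumed variant name
       dtypes=all_types_and(torch.bool),
       supports_autograd=True,
       safe_casts_outputs=True,
       sample_inputs_func=sample_inputs_zeta),  # assumed helper name
```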
Another great, thorough, careful PR @kshitij12345!
@kshitij12345 would you just merge this and ping me?

@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@mruberry Done :)

@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Reference #50345

`zeta` was already present in the codebase to support the computation of `polygamma`. However, before this PR, `zeta` only had a `double(double, double)` signature on CPU, which meant that `polygamma` computations were always upcast to `double` for the zeta part. With this PR, float computations take place in float and double computations in double.

Have also refactored the code and moved the duplicate code from `Math.cuh` to `Math.h`.

Note: For scipy, `q` is optional, and if it is `None` it defaults to `1`, which corresponds to the Riemann zeta function. However, for `torch.special.zeta` I made it mandatory, because it feels odd that without `q` it is the Riemann zeta function and with `q` it is the general Hurwitz zeta function. I think sticking to just the general form makes more sense, as passing `1` for `q` is trivial.

Verify:
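The verification snippet itself is truncated above; a plausible sketch of such a check against SciPy (the input ranges and tolerances here are assumptions):

```python
import torch
from scipy import special

# Compare torch.special.zeta against scipy.special.zeta in both float and
# double; per the PR description, float32 now stays in float internally
# instead of being upcast to double.
for dtype in (torch.float32, torch.float64):
    x = 1.1 + 9.0 * torch.rand(1000, dtype=dtype)  # x > 1 so the series converges
    q = 0.5 + 5.0 * torch.rand(1000, dtype=dtype)
    expected = torch.from_numpy(special.zeta(x.numpy(), q.numpy())).to(dtype)
    actual = torch.special.zeta(x, q)
    assert torch.allclose(actual, expected, rtol=1e-4, atol=1e-5)
    print(dtype, "matches scipy")
```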