
torch.special.spherical_bessel_j0 #78912


Closed

Conversation

@0x00b1 (Contributor) commented Jun 6, 2022

```Python
spherical_bessel_j0(input, *, out=None) -> Tensor
```

Spherical Bessel function of the first kind of order $0$.
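Mathematically, $j_0(x) = \sin(x)/x$, with the limiting value $j_0(0) = 1$. The sketch below is a minimal pure-Python reference for the scalar function, not the PyTorch kernel; the function name here is only illustrative (the operator in this PR applies the same formula elementwise to a tensor):

```python
import math

def spherical_bessel_j0(x: float) -> float:
    """Spherical Bessel function of the first kind of order 0:
    j0(x) = sin(x) / x, with the removable singularity j0(0) = 1."""
    if x == 0.0:
        return 1.0
    return math.sin(x) / x

# The tensor operator added by this PR is used analogously, e.g.:
#   torch.special.spherical_bessel_j0(torch.tensor([0.0, 1.0, math.pi]))
```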

@facebook-github-bot (Contributor) commented Jun 6, 2022


❌ 1 New Failures

As of commit 1c6de27 (more details on the Dr. CI page):

  • 1/1 failures introduced in this PR

🕵️ 1 new failure recognized by patterns

The following CI failures do not appear to be due to upstream breakages

See GitHub Actions build trunk / linux-bionic-py3.7-clang9-slow / test (slow, 1, 1, linux.2xlarge) (1/1)

Step: "Test"

2022-06-27T16:53:11.7199940Z   test_is_shared (__main__.TestMultiprocessing) ... skip: test is fast; we disabled it with PYTORCH_TEST_SKIP_FAST (0.000s)
2022-06-27T16:53:11.7203609Z   test_is_shared_cuda (__main__.TestMultiprocessing) ... skip: CUDA not available (0.000s)
2022-06-27T16:53:11.7213522Z   test_leaf_variable_sharing (__main__.TestMultiprocessing) ... skip: test is fast; we disabled it with PYTORCH_TEST_SKIP_FAST (0.001s)
2022-06-27T16:53:11.7216685Z   test_mixed_types_cuda_sharing (__main__.TestMultiprocessing) ... skip: CUDA IPC not available (0.000s)
2022-06-27T16:53:11.7223407Z   test_non_leaf_variable_sharing (__main__.TestMultiprocessing) ... skip: test is fast; we disabled it with PYTORCH_TEST_SKIP_FAST (0.001s)
2022-06-27T16:53:11.7228044Z   test_parameter_sharing (__main__.TestMultiprocessing) ... skip: test is fast; we disabled it with PYTORCH_TEST_SKIP_FAST (0.000s)
2022-06-27T16:53:11.7232980Z   test_variable_sharing (__main__.TestMultiprocessing) ... skip: test is fast; we disabled it with PYTORCH_TEST_SKIP_FAST (0.000s)
2022-06-27T16:53:11.7239385Z   test_wrong_cuda_fork (__main__.TestMultiprocessing) ... skip: CUDA not available (0.001s)
2022-06-27T16:53:11.7239717Z 
2022-06-27T16:53:11.7240197Z ======================================================================
2022-06-27T16:53:11.7240675Z FAIL [1.054s]: test_fs_sharing (__main__.TestMultiprocessing)
2022-06-27T16:53:11.7241309Z ----------------------------------------------------------------------
2022-06-27T16:53:11.7241719Z Traceback (most recent call last):
2022-06-27T16:53:11.7242123Z   File "test_multiprocessing.py", line 347, in test_fs_sharing
2022-06-27T16:53:11.7242370Z     self._test_sharing(repeat=TEST_REPEATS)
2022-06-27T16:53:11.7242630Z   File "test_multiprocessing.py", line 289, in _test_sharing
2022-06-27T16:53:11.7242850Z     test_receive()
2022-06-27T16:53:11.7243052Z   File "test_multiprocessing.py", line 206, in __exit__
2022-06-27T16:53:11.7243307Z     self.test_case.assertFalse(self.has_shm_files())
2022-06-27T16:53:11.7243538Z AssertionError: True is not false
2022-06-27T16:53:11.7243663Z 

This comment was automatically generated by Dr. CI.

@0x00b1 changed the title from torch.special.spherical_j0 to torch.special.spherical_bessel_j0 on Jun 6, 2022
@0x00b1 force-pushed the special_functions/spherical_bessel_j0 branch from be4c4ad to c23ecc1 on June 6, 2022 02:58
@0x00b1 marked this pull request as ready for review on June 6, 2022 19:17
@albanD removed their request for review on June 6, 2022 21:32
@0x00b1 force-pushed the special_functions/spherical_bessel_j0 branch from ee1e9d9 to 7b5f250 on June 7, 2022 19:48
@0x00b1 added the ciflow/trunk (Trigger trunk jobs on your pull request) label on Jun 7, 2022
@soulitzer removed their request for review on June 7, 2022 21:06
@0x00b1 force-pushed the special_functions/spherical_bessel_j0 branch from 1ac2bc4 to 727b4f3 on June 22, 2022 15:03
@mruberry (Collaborator) left a comment


Cool! Just needs a rebase

@0x00b1 (Contributor, Author) commented Jun 27, 2022

@pytorchbot merge

@pytorchmergebot (Collaborator) commented
@pytorchbot successfully started a merge job. Check the current status here

@github-actions (Contributor) commented

Hey @0x00b1.
You've committed this PR, but it does not have both a 'release notes: ...' and 'topics: ...' label. Please add one of each to the PR. The 'release notes: ...' label should represent the part of PyTorch that this PR changes (fx, autograd, distributed, etc) and the 'topics: ...' label should represent the kind of PR it is (not user facing, new feature, bug fix, perf improvement, etc). The list of valid labels can be found here for the 'release notes: ...' and here for the 'topics: ...'.
For changes that are 'topic: not user facing' there is no need for a release notes label.

facebook-github-bot pushed a commit that referenced this pull request Jun 30, 2022
Summary:
```Python
spherical_bessel_j0(input, *, out=None) -> Tensor
```

Spherical Bessel function of the first kind of order $0$.

Pull Request resolved: #78912
Approved by: https://github.com/mruberry

Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/ab8797d69bf282c709438d01598ed730068a6d51

Reviewed By: b0noI

Differential Revision: D37509630

Pulled By: 0x00b1

fbshipit-source-id: 37598dcb55b3ec3a5f08fd6f2f80b887521a1025
Labels
ciflow/trunk (Trigger trunk jobs on your pull request), cla signed, Merged

Projects
None yet

4 participants