.random_ not implemented for cuda tensors #712

Closed

gregjohnso opened this issue Feb 9, 2017 · 1 comment

Comments

@gregjohnso commented Feb 9, 2017

torch.Tensor(10).random_(0, 10)         # works
torch.Tensor(10).random_(0, 10).cuda()  # works
torch.Tensor(10).cuda().random_(0, 10)  # does not work: random_ is not implemented for CUDA tensors
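
A workaround at the time was to draw the samples on the CPU, where random_ is implemented, and move the result to the GPU afterwards, as the second line above already does. A minimal sketch, with an illustrative helper name that is not part of PyTorch:

import torch

def cuda_randint_workaround(n, low, high):
    # Sample on the CPU (random_ works there), then transfer the tensor to the GPU.
    return torch.Tensor(n).random_(low, high).cuda()

t = cuda_randint_workaround(10, 0, 10)  # float tensor on the GPU holding integer values in [0, 10)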

colesbury pushed a commit to colesbury/pytorch that referenced this issue Feb 28, 2017
Fix sampleMultinomialOnce to better handle large distribution values
@soumith soumith added the 5hr label Apr 18, 2017
@soumith soumith added this to Medium Priority in Issue Status Aug 23, 2017
@soumith soumith added this to nn / autograd / torch in Issue Categories Sep 13, 2017
@soumith soumith moved this from nn / autograd / torch to torch / autograd in Issue Categories Sep 20, 2017
@soumith (Member) commented Oct 11, 2017

fixed via #3042
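
On a build that includes that fix (assuming a recent PyTorch with a CUDA device available), the originally failing pattern succeeds; a quick check:

import torch

if torch.cuda.is_available():
    t = torch.empty(10, device='cuda')
    t.random_(0, 10)  # fills in place with integers drawn from [0, 10); previously raised an error
    print(t)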

@soumith soumith closed this as completed Oct 11, 2017
@soumith soumith removed this from torch / autograd in Issue Categories Oct 13, 2017
bddppq pushed a commit to bddppq/pytorch that referenced this issue Apr 17, 2018
…9c90c8

Previous import was a4dcc47791eb127652f5aaddd51d8896d446a067

Included changes:
- **[985af3f](onnx/onnx@985af3f)**: Update PythonAPIOverview.md (pytorch#738) <Dmytro Dzhulgakov>
- **[b69be33](onnx/onnx@b69be33)**: Add backend test for upsample (pytorch#729) <Sebastian Meßmer>
- **[0d9496e](onnx/onnx@0d9496e)**: Input test data of concat op should be float (pytorch#711) <Changming Sun>
- **[20bcb8b](onnx/onnx@20bcb8b)**: Fix the spec for batchnorm and instancenorm (pytorch#733) <Lu Fang>
- **[c9f825f](onnx/onnx@c9f825f)**: Refine a little bit about op spec. (pytorch#666) <Ke Zhang>
- **[a484eb2](onnx/onnx@a484eb2)**: Fix an error in Conv doc (pytorch#731) <Lu Fang>
- **[7410cc4](onnx/onnx@7410cc4)**: Fix incorrect package output paths (pytorch#730) <bddppq>
- **[be546e2](onnx/onnx@be546e2)**: Improve optimizer's API and docs (pytorch#713) <Lu Fang>
- **[c61506f](onnx/onnx@c61506f)**: Fix the shape inference python API (pytorch#716) <Lu Fang>
- **[e9d4134](onnx/onnx@e9d4134)**: Fix cmake on windows when not building python extension (pytorch#728) <bddppq>
- **[72187aa](onnx/onnx@72187aa)**: Add value_info support in make_graph (pytorch#726) <Lu Fang>
- **[67b7d89](onnx/onnx@67b7d89)**: Fix gen_proto in cmake (pytorch#719) <bddppq>
- **[fcb4ae3](onnx/onnx@fcb4ae3)**: docs rewording: Important Python Functions -> Python API Overview (pytorch#721) <anderspapitto>
- **[24275d6](onnx/onnx@24275d6)**: Ignore .eggs directory when doing lint (pytorch#722) <bddppq>
- **[54be8fa](onnx/onnx@54be8fa)**: Use cmake3 if it's available (pytorch#718) <bddppq>
- **[b8c4238](onnx/onnx@b8c4238)**: Add python function docs (pytorch#714) <Lu Fang>
- **[e177493](onnx/onnx@e177493)**: Remove unused cmake utils (pytorch#712) <bddppq>
- **[72d6ad6](onnx/onnx@72d6ad6)**: Remove pycmd from CMake (pytorch#710) <bddppq>
- **[93f0d40](onnx/onnx@93f0d40)**: Fix windows local build (pytorch#709) <Raymond Yang>
- **[6734224](onnx/onnx@6734224)**: CMake fixes and setup.py cleanup (pytorch#706) <bddppq>
- **[7f6a4fd](onnx/onnx@7f6a4fd)**: Add docs to explain important functions in ONNX Infra (pytorch#682) <Lu Fang>
- **[f0f6b3d](onnx/onnx@f0f6b3d)**: fix hardmax test cases make output dtype same as input (pytorch#705) <Wenhao Hu>
- **[c970f0c](onnx/onnx@c970f0c)**: Fix the Dummy backend (pytorch#701) <Lu Fang>
- **[2af45df](onnx/onnx@2af45df)**: setup.py uses cmake build system (pytorch#606) <anderspapitto>
- **[dfcaade](onnx/onnx@dfcaade)**: clean up unused variable left by removing consumed_input (pytorch#697) <bddppq>
- **[accfc74](onnx/onnx@accfc74)**: Remove incorrect backend test (pytorch#700) <Lu Fang>
- **[e558732](onnx/onnx@e558732)**: add max inclusive version to defs.get_schema function (pytorch#695) <Wenhao Hu>
- **[16f02eb](onnx/onnx@16f02eb)**: add API to add domain to min/max version for extension. (pytorch#694) <Ke Zhang>
- **[3e560dd](onnx/onnx@3e560dd)**: Fix doc for initializer (pytorch#690) <bddppq>
- **[6cc4f53](onnx/onnx@6cc4f53)**: Add model save function (pytorch#692) <Lu Fang>
- **[21eaf9b](onnx/onnx@21eaf9b)**: Changing the string discussing versions in operator specifications. (pytorch#691) <Niklas Gustafsson>
- **[3b0cdf4](onnx/onnx@3b0cdf4)**: Minor code quality improvements in optimizer/ (pytorch#612) <Sebastian Meßmer>
- **[641f126](onnx/onnx@641f126)**: Fix Gemm doc wording (pytorch#689) <bddppq>
- **[4a0ec75](onnx/onnx@4a0ec75)**: Clarifies installation error message when external protobuf dependencies are missing (pytorch#684) <Daniel J. H>
- **[960a2c3](onnx/onnx@960a2c3)**: Check outputs dtype in backend tests (pytorch#567) <bddppq>
- **[1d7dee4](onnx/onnx@1d7dee4)**: Fix Average pool test cases converted from PyTorch (pytorch#677) <Lu Fang>
- **[36d7fff](onnx/onnx@36d7fff)**: Fix Attribute default value pybind11 binding (pytorch#671) <bddppq>
- **[0536866](onnx/onnx@0536866)**: git ignore .pytest_cache (pytorch#674) <bddppq>
- **[afc84ac](onnx/onnx@afc84ac)**: Update README.md (pytorch#672) <Dmytro Dzhulgakov>
- **[9d2b530](onnx/onnx@9d2b530)**: Revert "[Typing 1/3] Setup mypy type checker (pytorch#607)" (pytorch#667) <bddppq>
- **[086727e](onnx/onnx@086727e)**: [Typing 1/3] Setup mypy type checker (pytorch#607) <Sebastian Meßmer>
- **[5716e20](onnx/onnx@5716e20)**: Convert all Node tests to Model tests (pytorch#651) <bddppq>
- **[6fe932a](onnx/onnx@6fe932a)**: Replace unittest.skip with custom exception (pytorch#659) <Dmytro Dzhulgakov>
- **[ecac1c1](onnx/onnx@ecac1c1)**: Merge Rel 1.1.0 branch into master (pytorch#657) <Anirudh>
- **[5cb999d](onnx/onnx@5cb999d)**: Minor cleanups to shape inference (pytorch#653) <anderspapitto>
- **[f4acf28](onnx/onnx@f4acf28)**: Remove allowconsumed enforceconsumed from op schema. (pytorch#617) <Ke Zhang>
- **[a8e4648](onnx/onnx@a8e4648)**: Adjust link flags when built in Windows Debug mode (pytorch#647) <Yinghai Lu>
- **[7c009fe](onnx/onnx@7c009fe)**: Fix lint error in optimizer test (pytorch#656) <bddppq>
- **[063d12f](onnx/onnx@063d12f)**: Fix optimizer split pass for models with constant output (pytorch#652) <bddppq>
mcarilli pushed a commit to mcarilli/pytorch that referenced this issue Mar 18, 2021
… loop (pytorch#712)

* fix reduce semantics; add sync to block in loop

* comment

* format

* clang-tidy

* merge the syncthread into block kernel

* reverted commented test check

* add syncthread at the end of blockbroadcast
KyleCZH pushed a commit to KyleCZH/pytorch that referenced this issue Sep 20, 2021
Labels: None yet
Projects: Issue Status (Medium Priority)
Development: No branches or pull requests
3 participants