Fix a bug where from_dlpack fails if cuda is not initialized. #4182


Merged: 1 commit merged into master, Dec 15, 2017

Conversation

@zdevito (Contributor) commented Dec 14, 2017

No description provided.

@zdevito force-pushed the pr/explicit-init-function branch from a3a0219 to c8ca4d5 on December 14, 2017 23:03
// CUDA is initialized lazily, so if we have a CUDA tensor we need to
// make sure _lazy_init has been called before touching its storage.
if (atensor.is_cuda()) {
  py::module::import("torch.cuda").attr("init")();
}


def test_dlpack_cuda(self):
    x = torch.randn(1, 2, 3, 4).cuda()
    z = from_dlpack(to_dlpack(x))
    self.assertEqual(z, x)


@ezyang ezyang merged commit d8c5f2a into master Dec 15, 2017
@ezyang ezyang deleted the pr/explicit-init-function branch February 2, 2018 03:25