
[Graph]To(device) should return updated object #1057

Merged

merged 5 commits into dmlc:master on Nov 28, 2019
Conversation

tczhangzhi (Contributor) commented on Nov 28, 2019

Description

When using PyTorch, users are used to assigning the return value of to() to a new variable:

# torch/nn/modules/module.py
def to(...):
    ...
    return self._apply(convert)
# we usually do this:
# new_tensor = tensor.to(device)

DGL's to behaves differently from PyTorch: it returns None, which can cause bugs.
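
For illustration, a minimal sketch (not part of the PR) of why returning None breaks this idiom; the graph-construction calls are assumptions based on the DGL 0.4 API:

# Hypothetical example, not taken from the PR.
import torch
import dgl

g = dgl.DGLGraph()
g.add_nodes(3)

# The idiom users expect, mirroring torch.nn.Module.to / torch.Tensor.to:
g = g.to(torch.device('cpu'))

# Before this change, to() returned None, so g is rebound to None here and
# any further use of the graph raises AttributeError.
print(g.number_of_nodes())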

Related Issues/Forum Threads

Checklist

Please feel free to remove inapplicable items for your PR.

  • The PR title starts with [$CATEGORY] (such as [NN], [Model], [Doc], [Feature])
  • Changes are complete (i.e. I finished coding on this PR)
  • All changes have test coverage
  • Code is well-documented
  • To the best of my knowledge, examples are either not affected by this change
    or have been fixed to be compatible with this change
  • Related issue is referred in this PR

Changes

  • Modified the to function so that it returns self (a rough sketch follows below)
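
A rough sketch of the shape of the change (paraphrased, not the exact patch in this PR; F stands for DGL's backend module and ctx for a framework device object such as a torch.device):

# Paraphrased sketch, not the exact diff merged by this PR.
def to(self, ctx):
    # Move all node and edge features to the target device.
    for k in self.ndata.keys():
        self.ndata[k] = F.copy_to(self.ndata[k], ctx)
    for k in self.edata.keys():
        self.edata[k] = F.copy_to(self.edata[k], ctx)
    # The fix: return the graph itself so the call can be chained or
    # reassigned, matching the PyTorch convention.
    return self
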
tczhangzhi added 3 commits Nov 28, 2019
mufeili self-requested a review on Nov 28, 2019
mufeili (Member) commented on Nov 28, 2019

Thank you for your contribution; I agree this is a nice behavior to have.

It would be great if you could also:

  1. Add descriptions for the return values at L3341 (DGLGraph) and L3577 (DGLHeteroGraph).
  2. Add a test for DGLHeteroGraph.to to the DGLHeteroGraph tests (see the sketch below).
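
For illustration, such a test might look roughly like this (a sketch only, not necessarily the test merged in this PR; the heterograph constructor and the device argument passed to to() are assumptions):

# Hypothetical test sketch; names and construction calls are illustrative.
import torch
import dgl

def test_heterograph_to_device():
    g = dgl.heterograph({
        ('user', 'plays', 'game'): [(0, 0), (1, 1)]})
    if torch.cuda.is_available():
        g = g.to(torch.device('cuda:0'))
        # The point of this PR: to() should hand back the updated graph
        # rather than None.
        assert g is not None
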
tczhangzhi added 2 commits Nov 28, 2019
tczhangzhi (Contributor, Author) commented on Nov 28, 2019

My test case passes, but changes by others cause test_all_binary_builtins to fail:

======================================================================
FAIL: test_kernel.test_all_binary_builtins
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/nose/case.py", line 198, in runTest
    self.test(*self.arg)
  File "/var/jenkins_home/workspace/DGL_PR-1057@3/tests/compute/test_kernel.py", line 348, in test_all_binary_builtins
    broadcast=broadcast)
  File "/var/jenkins_home/workspace/DGL_PR-1057@3/tests/compute/test_kernel.py", line 317, in _test
    assert(F.allclose(lhs_grad_1, lhs_grad_2, rtol, atol))
AssertionError: 
-------------------- >> begin captured stdout << ---------------------
left grad
ERROR: Test e_add_u_min broadcast: u partial: False
@1138 1.0 v.s. 0.0
--------------------- >> end captured stdout << ----------------------
----------------------------------------------------------------------
XML: /var/jenkins_home/workspace/DGL_PR-1057@3/nosetests.xml
----------------------------------------------------------------------
VoVAllen (Collaborator) commented on Nov 28, 2019

Please ignore this error. I just retriggered the testing system.

mufeili (Member) left a comment

LGTM. Could you please make an empty commit to trigger the CI?

VoVAllen (Collaborator) commented on Nov 28, 2019

I manually retriggered the CI.

mufeili merged commit 4f60631 into dmlc:master on Nov 28, 2019
1 check passed: continuous-integration/jenkins/pr-merge (This commit looks good)