
[Bugfix][MXNet] Fix edge order and builtin max bug in mx #247

Merged
merged 3 commits into master from fix-mx-spmv on Dec 5, 2018

Conversation

@jermainewang (Member) commented Dec 5, 2018

Changes:

  • Fix the bug in src_mul_edge caused by the csr_matrix edge order not matching the COO order (see the sketch after this list).
  • Fix the bug in builtin max caused by different broadcasting behaviors between MX and TH.
  • Fix the bug in utils caused by different behaviors for rank-0 tensors between MX and TH.
  • Fix the bug in creating a graph from a scipy sparse matrix -- the node list is not correctly provided.
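
For context, here is a minimal sketch of the ordering mismatch behind the first bullet, using plain scipy (no DGL); the edge list mirrors the repro further down:

import numpy as np
import scipy.sparse as sp

# Edges in insertion (COO) order: (1, 2), (0, 1), (0, 2); data holds the edge ids.
src = np.array([1, 0, 0])
dst = np.array([2, 1, 2])
coo = sp.coo_matrix((np.arange(3), (src, dst)), shape=(3, 3))

print(coo.data)          # [0 1 2] -- edge ids in insertion order
print(coo.tocsr().data)  # [1 2 0] -- CSR sorts entries by row, so the edge order changes

Any code that assumes the CSR data array follows the original COO edge order will read the wrong per-edge values.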
@jermainewang (Member, Author) commented Dec 5, 2018

Another bug in the immutable graph index, which might need your efforts to fix, @eric-haibin-lin @zheng-da:

import os
os.environ['DGLBACKEND'] = 'mxnet'
import dgl

# Read-only graph: uses the immutable graph index.
elist = [(1, 2), (0, 1), (0, 2)]
g = dgl.DGLGraph(elist, readonly=True)
print(g.edges(form='all'))
print(g.edge_id(0, 1))

# Mutable graph built from the same edge list.
elist = [(1, 2), (0, 1), (0, 2)]
g = dgl.DGLGraph(elist, readonly=False)
print(g.edges(form='all'))
print(g.edge_id(0, 1))

You can try the above example. The results differ, and the second one is the correct output (it also matches the PyTorch backend).

python/dgl/backend/mxnet/tensor.py
@@ -73,7 +83,7 @@ def mean(input, dim):
     return nd.mean(input, axis=dim)
 
 def max(input, dim):
-    return nd.max(input, axis=dim).asnumpy()[0]
+    return nd.max(input, axis=dim)

@zheng-da (Collaborator) commented Dec 5, 2018

I thought this needs to return a Python scalar?

@jermainewang (Author, Member) commented Dec 5, 2018

Nope. This should be the normal max that reduces along the given dim, similar to sum.
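
For reference, a minimal sketch of the intended behavior (plain MXNet, nothing DGL-specific):

import mxnet.ndarray as nd

x = nd.array([[1, 5],
              [3, 2]])
print(nd.max(x, axis=0))  # [3. 5.] -- reduces along dim 0; the result is still an NDArray
print(nd.max(x, axis=1))  # [5. 3.] -- reduces along dim 1, not a Python scalar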

@@ -237,8 +237,8 @@ def build_relabel_map(x, sorted=False):
         unique_x, _ = F.sort_1d(F.unique(x))
     else:
         unique_x = x
-    map_len = int(F.max(unique_x, dim=0)) + 1
+    map_len = int(F.asnumpy(F.max(unique_x, dim=0))) + 1
     old_to_new = F.zeros(map_len, dtype=F.int64, ctx=F.cpu())

@zheng-da (Collaborator) commented Dec 5, 2018

Why do you prefer to use asnumpy here? What happens with PyTorch?

@jermainewang (Author, Member) commented Dec 5, 2018

In torch, int(tensor([0])) directly becomes 0. This is not valid for MX. Technically, int() should also be a backend function call. Changing it to numpy makes it more consistent between the two backends.
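
A minimal sketch of the difference (the exact MXNet failure mode depends on the version in use here, so treat that as an assumption):

import torch
import mxnet.ndarray as nd

t = torch.tensor([0])
print(int(t))  # 0 -- a single-element torch tensor converts directly

m = nd.array([0])
# int(m) fails at this point in time because NDArray does not implement __int__;
# routing through numpy works the same way on both backends:
print(int(m.asnumpy()[0]))  # 0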

@zheng-da (Collaborator) commented Dec 5, 2018

So we should have MXNet NDArray support int()? I think this is easy to do. I'll create an issue in the MXNet repo for this.

jermainewang and others added some commits Dec 5, 2018

@zheng-da zheng-da merged commit eafcb7e into master Dec 5, 2018

1 check passed: continuous-integration/jenkins/pr-merge (This commit looks good)

@jermainewang deleted the fix-mx-spmv branch Dec 5, 2018
